David Ferrucci: IBM Watson, Jeopardy & Deep Conversations with AI #44

Transcript

00:00:00 The following is a conversation with David Ferrucci.

00:00:03 He led the team that built Watson,

00:00:05 the IBM question answering system

00:00:07 that beat the top humans in the world

00:00:09 at the game of Jeopardy.

00:00:11 From spending a couple hours with David,

00:00:12 I saw a genuine passion,

00:00:14 not only for abstract understanding of intelligence,

00:00:17 but for engineering it to solve real world problems

00:00:21 under real world deadlines and resource constraints.

00:00:24 Where science meets engineering

00:00:26 is where brilliant, simple ingenuity emerges.

00:00:29 People who work at that intersection

00:00:32 have a lot of wisdom earned

00:00:33 through failures and eventual success.

00:00:36 David is also the founder, CEO,

00:00:39 and chief scientist of Elemental Cognition,

00:00:41 a company working to engineer AI systems

00:00:44 that understand the world the way people do.

00:00:47 This is the Artificial Intelligence Podcast.

00:00:50 If you enjoy it, subscribe on YouTube,

00:00:52 give it five stars on iTunes,

00:00:54 support it on Patreon,

00:00:55 or simply connect with me on Twitter

00:00:57 at Lex Friedman, spelled F R I D M A N.

00:01:01 And now, here’s my conversation with David Ferrucci.

00:01:06 Your undergrad was in biology

00:01:07 with an eye toward medical school

00:01:11 before you went on for the PhD in computer science.

00:01:14 So let me ask you an easy question.

00:01:16 What is the difference between biological systems

00:01:20 and computer systems?

00:01:22 In your, when you sit back,

00:01:25 look at the stars, and think philosophically.

00:01:28 I often wonder whether or not

00:01:30 there is a substantive difference.

00:01:32 I mean, I think the thing that got me

00:01:34 into computer science and into artificial intelligence

00:01:37 was exactly this presupposition

00:01:39 that if we can get machines to think,

00:01:44 or I should say this question,

00:01:45 this philosophical question,

00:01:47 if we can get machines to think,

00:01:50 to understand, to process information the way we do,

00:01:54 so if we can describe a procedure, describe a process,

00:01:57 even if that process were the intelligence process itself,

00:02:02 then what would be the difference?

00:02:05 So from a philosophical standpoint,

00:02:07 I’m not sure I’m convinced that there is.

00:02:11 I mean, you can go in the direction of spirituality,

00:02:14 you can go in the direction of the soul,

00:02:16 but in terms of what we can experience

00:02:21 from an intellectual and physical perspective,

00:02:25 I’m not sure there is.

00:02:27 Clearly, there are different implementations,

00:02:31 but if you were to say,

00:02:33 is a biological information processing system

00:02:36 fundamentally more capable

00:02:38 than one we might be able to build out of silicon

00:02:41 or some other substrate,

00:02:44 I don’t know that there is.

00:02:46 How distant do you think is the biological implementation?

00:02:50 So fundamentally, they may have the same capabilities,

00:02:53 but is it really a far mystery

00:02:58 where a huge number of breakthroughs are needed

00:03:00 to be able to understand it,

00:03:02 or is it something that, for the most part,

00:03:06 in the important aspects,

00:03:08 echoes of the same kind of characteristics?

00:03:11 Yeah, that’s interesting.

00:03:12 I mean, so your question presupposes

00:03:15 that there’s this goal to recreate

00:03:17 what we perceive as biological intelligence.

00:03:20 I’m not sure that’s the,

00:03:24 I’m not sure that’s how I would state the goal.

00:03:26 I mean, I think that studying.

00:03:27 What is the goal?

00:03:29 Good, so I think there are a few goals.

00:03:32 I think that understanding the human brain

00:03:35 and how it works is important

00:03:38 for us to be able to diagnose and treat issues,

00:03:43 treat issues for us to understand our own strengths

00:03:47 and weaknesses, both intellectual,

00:03:51 psychological, and physical.

00:03:52 So neuroscience and understanding the brain,

00:03:55 from that perspective, there’s a clear goal there.

00:03:59 From the perspective of saying,

00:04:00 I wanna mimic human intelligence,

00:04:04 that one’s a little bit more interesting.

00:04:06 Human intelligence certainly has a lot of things we envy.

00:04:10 It’s also got a lot of problems, too.

00:04:12 So I think we’re capable of sort of stepping back

00:04:16 and saying, what do we want out of an intelligence?

00:04:22 How do we wanna communicate with that intelligence?

00:04:24 How do we want it to behave?

00:04:25 How do we want it to perform?

00:04:27 Now, of course, it’s somewhat of an interesting argument

00:04:30 because I’m sitting here as a human

00:04:32 with a biological brain,

00:04:33 and I’m critiquing the strengths and weaknesses

00:04:36 of human intelligence and saying

00:04:38 that we have the capacity to step back

00:04:42 and say, gee, what is intelligence

00:04:44 and what do we really want out of it?

00:04:46 And that, in and of itself, suggests that

00:04:50 human intelligence is something quite enviable,

00:04:52 that it can introspect that way.

00:04:58 And the flaws, you mentioned the flaws.

00:05:00 Humans have flaws.

00:05:01 Yeah, but I think that flaws that human intelligence has

00:05:04 is extremely prejudicial and biased

00:05:08 in the way it draws many inferences.

00:05:10 Do you think those are, sorry to interrupt,

00:05:12 do you think those are features or are those bugs?

00:05:14 Do you think the prejudice, the forgetfulness, the fear,

00:05:21 what are the flaws?

00:05:22 List them all.

00:05:23 What, love?

00:05:24 Maybe that’s a flaw.

00:05:25 You think those are all things that can be gotten,

00:05:28 get in the way of intelligence

00:05:30 or the essential components of intelligence?

00:05:33 Well, again, if you go back and you define intelligence

00:05:36 as being able to sort of accurately, precisely, rigorously,

00:05:42 reason, develop answers,

00:05:43 and justify those answers in an objective way,

00:05:46 yeah, then human intelligence has these flaws

00:05:49 in that it tends to be more influenced

00:05:52 by some of the things you said.

00:05:56 And it’s largely an inductive process,

00:05:59 meaning it takes past data,

00:06:01 uses that to predict the future.

00:06:03 Very advantageous in some cases,

00:06:06 but fundamentally biased and prejudicial in other cases

00:06:09 because it’s gonna be strongly influenced by its priors,

00:06:11 whether they’re right or wrong

00:06:13 from some objective reasoning perspective,

00:06:17 you’re gonna favor them because those are the decisions

00:06:20 or those are the paths that succeeded in the past.

00:06:24 And I think that mode of intelligence makes a lot of sense

00:06:29 for when your primary goal is to act quickly

00:06:33 and survive and make fast decisions.

00:06:37 And I think those create problems

00:06:40 when you wanna think more deeply

00:06:42 and make more objective and reasoned decisions.

00:06:45 Of course, humans are capable of doing both.

00:06:48 They do sort of one more naturally than they do the other,

00:06:51 but they’re capable of doing both.

00:06:53 You’re saying they do the one

00:06:54 that responds quickly more naturally.

00:06:56 Right.

00:06:57 Because that’s the thing we kind of need

00:06:58 to not be eaten by the predators in the world.

00:07:02 For example, but then we’ve learned to reason through logic,

00:07:09 we’ve developed science, we train people to do that.

00:07:13 I think that’s harder for the individual to do.

00:07:16 I think it requires training and teaching.

00:07:20 I think we are, the human mind certainly is capable of it,

00:07:24 but we find it more difficult.

00:07:25 And then there are other weaknesses, if you will,

00:07:27 as you mentioned earlier, just memory capacity

00:07:30 and how many chains of inference

00:07:33 can you actually go through without like losing your way?

00:07:37 So just focus and…

00:07:40 So the way you think about intelligence,

00:07:43 and we’re really sort of floating

00:07:45 in this philosophical space,

00:07:47 but I think you’re like the perfect person

00:07:50 to talk about this,

00:07:52 because we’ll get to Jeopardy and beyond.

00:07:55 That’s like one of the most incredible accomplishments

00:07:58 in AI, in the history of AI,

00:08:00 but hence the philosophical discussion.

00:08:03 So let me ask, you’ve kind of alluded to it,

00:08:06 but let me ask again, what is intelligence?

00:08:09 Underlying the discussions we’ll have

00:08:12 with Jeopardy and beyond,

00:08:15 how do you think about intelligence?

00:08:17 Is it a sufficiently complicated problem

00:08:19 being able to reason your way through solving that problem?

00:08:22 Is that kind of how you think about

00:08:23 what it means to be intelligent?

00:08:25 So I think of intelligence primarily two ways.

00:08:29 One is the ability to predict.

00:08:33 So in other words, if I have a problem,

00:08:35 can I predict what’s gonna happen next?

00:08:37 Whether it’s to predict the answer of a question

00:08:40 or to say, look, I’m looking at all the market dynamics

00:08:43 and I’m gonna tell you what’s gonna happen next,

00:08:46 or you’re in a room and somebody walks in

00:08:49 and you’re gonna predict what they’re gonna do next

00:08:51 or what they’re gonna say next.

00:08:53 You’re in a highly dynamic environment

00:08:55 full of uncertainty, be able to predict.

00:08:58 The more variables, the more complex.

00:09:01 The more possibilities, the more complex.

00:09:04 But can I take a small amount of prior data

00:09:07 and learn the pattern and then predict

00:09:09 what’s gonna happen next accurately and consistently?

00:09:13 That’s certainly a form of intelligence.

00:09:16 What do you need for that, by the way?

00:09:18 You need to have an understanding

00:09:21 of the way the world works

00:09:22 in order to be able to unroll it into the future, right?

00:09:26 What do you think is needed to predict?

00:09:28 Depends what you mean by understanding.

00:09:29 I need to be able to find that function.

00:09:32 This is very much what deep learning does,

00:09:35 machine learning does, is if you give me enough prior data

00:09:38 and you tell me what the output variable is that matters,

00:09:41 I’m gonna sit there and be able to predict it.

00:09:44 And if I can predict it accurately

00:09:47 so that I can get it right more often than not,

00:09:50 I’m smart, if I can do that with less data

00:09:52 and less training time, I’m even smarter.

00:09:58 If I can figure out what’s even worth predicting,

00:10:01 I’m smarter, meaning I’m figuring out

00:10:03 what path is gonna get me toward a goal.
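
(A minimal sketch of the function-fitting David describes here: given enough prior data and the output variable that matters, learn a function and use it to predict what happens next. The data and model choice below are purely illustrative, not anything discussed in the conversation.)

```python
# Minimal sketch of "finding the function": fit a model on prior
# (input, output) pairs, then predict the output for a new input.
# The numbers are made up for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Prior data: inputs (features) and the output variable that matters.
X_prior = np.array([[1.0], [2.0], [3.0], [4.0]])
y_prior = np.array([2.1, 3.9, 6.2, 8.1])

model = LinearRegression()
model.fit(X_prior, y_prior)      # learn the function from past data

x_new = np.array([[5.0]])
print(model.predict(x_new))      # predict what's gonna happen next
```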

00:10:06 What about picking a goal?

00:10:07 Sorry to interrupt again.

00:10:08 Well, that’s interesting about picking a goal,

00:10:10 sort of an interesting thing.

00:10:11 I think that’s where you bring in

00:10:13 what are you preprogrammed to do?

00:10:15 We talk about humans,

00:10:16 and well, humans are preprogrammed to survive.

00:10:19 So it’s sort of their primary driving goal.

00:10:23 What do they have to do to do that?

00:10:24 And that can be very complex, right?

00:10:27 So it’s not just figuring out that you need to run away

00:10:31 from the ferocious tiger,

00:10:33 but we survive in a social context as an example.

00:10:38 So understanding the subtleties of social dynamics

00:10:42 becomes something that’s important for surviving,

00:10:45 finding a mate, reproducing, right?

00:10:47 So we’re continually challenged with

00:10:50 complex sets of variables, complex constraints,

00:10:53 rules, if you will, or patterns.

00:10:56 And we learn how to find the functions

00:10:59 and predict the things.

00:11:00 In other words, represent those patterns efficiently

00:11:03 and be able to predict what’s gonna happen.

00:11:04 And that’s a form of intelligence.

00:11:06 That doesn’t really require anything specific

00:11:11 other than the ability to find that function

00:11:13 and predict that right answer.

00:11:15 That’s certainly a form of intelligence.

00:11:18 But then when we say, well, do we understand each other?

00:11:23 In other words, would you perceive me as intelligent

00:11:28 beyond that ability to predict?

00:11:31 So now I can predict, but I can’t really articulate

00:11:35 how I’m going through that process,

00:11:37 what my underlying theory is for predicting,

00:11:41 and I can’t get you to understand what I’m doing

00:11:43 so that you can figure out how to do this yourself

00:11:48 if you did not have, for example,

00:11:50 the right pattern matching machinery that I did.

00:11:53 And now we potentially have this breakdown

00:11:55 where, in effect, I’m intelligent,

00:11:59 but I’m sort of an alien intelligence relative to you.

00:12:02 You’re intelligent, but nobody knows about it, or I can’t.

00:12:05 Well, I can see the output.

00:12:08 So you’re saying, let’s sort of separate the two things.

00:12:11 One is you explaining why you were able

00:12:15 to predict the future,

00:12:19 and the second is me being able to,

00:12:23 impressing me that you’re intelligent,

00:12:25 me being able to know

00:12:26 that you successfully predicted the future.

00:12:28 Do you think that’s?

00:12:29 Well, it’s not impressing you that I’m intelligent.

00:12:31 In other words, you may be convinced

00:12:33 that I’m intelligent in some form.

00:12:35 So how, what would convince?

00:12:37 Because of my ability to predict.

00:12:38 So I would look at the metrics.

00:12:39 When you can and I can’t, I’d say, wow.

00:12:41 You’re right more times than I am.

00:12:44 You’re doing something interesting.

00:12:46 That’s a form of intelligence.

00:12:49 But then what happens is, if I say, how are you doing that?

00:12:53 And you can’t communicate with me,

00:12:55 and you can’t describe that to me,

00:12:57 now I may label you a savant.

00:13:00 I may say, well, you’re doing something weird,

00:13:03 and it’s just not very interesting to me,

00:13:06 because you and I can’t really communicate.

00:13:09 And so now, so this is interesting, right?

00:13:12 Because now this is, you’re in this weird place

00:13:15 where for you to be recognized

00:13:17 as intelligent the way I’m intelligent,

00:13:21 then you and I sort of have to be able to communicate.

00:13:24 And then my, we start to understand each other,

00:13:28 and then my respect and my appreciation,

00:13:33 my ability to relate to you starts to change.

00:13:36 So now you’re not an alien intelligence anymore.

00:13:39 You’re a human intelligence now,

00:13:41 because you and I can communicate.

00:13:43 And so I think when we look at animals, for example,

00:13:48 animals can do things we can’t quite comprehend,

00:13:50 we don’t quite know how they do them,

00:13:51 but they can’t really communicate with us.

00:13:54 They can’t put what they’re going through in our terms.

00:13:58 And so we think of them as sort of,

00:13:59 well, they’re these alien intelligences,

00:14:01 and they’re not really worth necessarily what we’re worth.

00:14:03 We don’t treat them the same way as a result of that.

00:14:06 But it’s hard because who knows what’s going on.

00:14:11 So just a quick elaboration on that,

00:14:15 the explaining that you’re intelligent,

00:14:18 the explaining the reasoning that went into the prediction

00:14:23 is not some kind of mathematical proof.

00:14:27 If we look at humans,

00:14:28 look at political debates and discourse on Twitter,

00:14:32 it’s mostly just telling stories.

00:14:35 So your task is, sorry,

00:14:38 your task is not to tell an accurate depiction

00:14:43 of how you reason, but to tell a story, real or not,

00:14:48 that convinces me that there was a mechanism by which you.

00:14:52 Ultimately, that’s what a proof is.

00:14:53 I mean, even a mathematical proof is that.

00:14:56 Because ultimately, the other mathematicians

00:14:58 have to be convinced by your proof.

00:15:01 Otherwise, in fact, there have been.

00:15:03 That’s the metric for success, yeah.

00:15:04 There have been several proofs out there

00:15:06 where mathematicians would study for a long time

00:15:08 before they were convinced

00:15:08 that it actually proved anything, right?

00:15:10 You never know if it proved anything

00:15:12 until the community of mathematicians decided that it did.

00:15:14 So I mean, but it’s a real thing, right?

00:15:18 And that’s sort of the point, right?

00:15:20 Is that ultimately, this notion of understanding us,

00:15:24 understanding something is ultimately a social concept.

00:15:28 In other words, I have to convince enough people

00:15:30 that I did this in a reasonable way.

00:15:33 I did this in a way that other people can understand

00:15:36 and replicate and that it makes sense to them.

00:15:39 So human intelligence is bound together in that way.

00:15:44 We’re bound up in that sense.

00:15:47 We sort of never really get away with it

00:15:49 until we can sort of convince others

00:15:52 that our thinking process makes sense.

00:15:55 Did you think the general question of intelligence

00:15:59 is then also a social construct?

00:16:01 So if we ask questions of an artificial intelligence system,

00:16:06 is this system intelligent?

00:16:08 The answer will ultimately be socially constructed.

00:16:12 I think, so I think I’m making two statements.

00:16:16 I’m saying we can try to define intelligence

00:16:18 in this super objective way that says, here’s this data.

00:16:23 I wanna predict this type of thing, learn this function.

00:16:26 And then if you get it right, often enough,

00:16:30 we consider you intelligent.

00:16:32 But that’s more like a savant.

00:16:34 I think it is.

00:16:35 It doesn’t mean it’s not useful.

00:16:37 It could be incredibly useful.

00:16:38 It could be solving a problem we can’t otherwise solve

00:16:41 and can solve it more reliably than we can.

00:16:44 But then there’s this notion of,

00:16:46 can humans take responsibility

00:16:50 for the decision that you’re making?

00:16:53 Can we make those decisions ourselves?

00:16:56 Can we relate to the process that you’re going through?

00:16:58 And now you as an agent,

00:17:01 whether you’re a machine or another human, frankly,

00:17:04 are now obliged to make me understand

00:17:08 how it is that you’re arriving at that answer

00:17:10 and allow me, me or obviously a community

00:17:13 or a judge of people to decide

00:17:15 whether or not that makes sense.

00:17:17 And by the way, that happens with the humans as well.

00:17:20 You’re sitting down with your staff, for example,

00:17:22 and you ask for suggestions about what to do next.

00:17:26 And someone says, oh, I think you should buy.

00:17:28 And I actually think you should buy this much

00:17:30 or whatever or sell or whatever it is.

00:17:33 Or I think you should launch the product today or tomorrow

00:17:35 or launch this product versus that product,

00:17:37 whatever the decision may be.

00:17:38 And you ask why.

00:17:39 And the person says,

00:17:40 I just have a good feeling about it.

00:17:42 And you’re not very satisfied.

00:17:44 Now, that person could be,

00:17:47 you might say, well, you’ve been right before,

00:17:50 but I’m gonna put the company on the line.

00:17:54 Can you explain to me why I should believe this?

00:17:56 Right.

00:17:58 And that explanation may have nothing to do with the truth.

00:18:00 You just, the ultimate.

00:18:01 It’s gotta convince the other person.

00:18:03 It could still be wrong, still be wrong.

00:18:05 It’s gotta be convincing.

00:18:06 But it’s ultimately gotta be convincing.

00:18:07 And that’s why I’m saying it’s,

00:18:10 we’re bound together, right?

00:18:12 Our intelligences are bound together in that sense.

00:18:14 We have to understand each other.

00:18:15 And if, for example, you’re giving me an explanation,

00:18:18 I mean, this is a very important point, right?

00:18:21 You’re giving me an explanation,

00:18:23 and I’m not good at reasoning well,

00:18:33 and being objective,

00:18:35 and following logical paths and consistent paths,

00:18:39 and I’m not good at measuring

00:18:41 and sort of computing probabilities across those paths.

00:18:45 What happens is collectively,

00:18:47 we’re not gonna do well.

00:18:50 How hard is that problem?

00:18:52 The second one.

00:18:53 So I think we’ll talk quite a bit about the first

00:18:57 on a specific objective metric benchmark performing well.

00:19:03 But being able to explain the steps,

00:19:07 the reasoning, how hard is that problem?

00:19:10 I think that’s very hard.

00:19:11 I mean, I think that that’s,

00:19:16 well, it’s hard for humans.

00:19:18 The thing that’s hard for humans, as you know,

00:19:20 may not necessarily be hard for computers

00:19:22 and vice versa.

00:19:24 So, sorry, so how hard is that problem for computers?

00:19:31 I think it’s hard for computers,

00:19:32 and the reason why I related to,

00:19:34 or saying that it’s also hard for humans

00:19:36 is because I think when we step back

00:19:38 and we say we wanna design computers to do that,

00:19:43 one of the things we have to recognize

00:19:46 is we’re not sure how to do it well.

00:19:50 I’m not sure we have a recipe for that.

00:19:52 And even if you wanted to learn it,

00:19:55 it’s not clear exactly what data we use

00:19:59 and what judgments we use to learn that well.

00:20:03 And so what I mean by that is

00:20:05 if you look at the entire enterprise of science,

00:20:09 science is supposed to be about

00:20:11 objective reason, right?

00:20:13 So we think about, gee, who’s the most intelligent person

00:20:17 or group of people in the world?

00:20:20 Do we think about the savants who can close their eyes

00:20:24 and give you a number?

00:20:25 We think about the think tanks,

00:20:27 or the scientists or the philosophers

00:20:29 who kind of work through the details

00:20:32 and write the papers and come up with the thoughtful,

00:20:35 logical proofs and use the scientific method.

00:20:39 I think it’s the latter.

00:20:42 And my point is that how do you train someone to do that?

00:20:45 And that’s what I mean by it’s hard.

00:20:46 How do you, what’s the process of training people

00:20:49 to do that well?

00:20:50 That’s a hard process.

00:20:52 We work, as a society, we work pretty hard

00:20:56 to get other people to understand our thinking

00:20:59 and to convince them of things.

00:21:02 Now we could persuade them,

00:21:04 obviously you talked about this,

00:21:05 like human flaws or weaknesses,

00:21:07 we can persuade them through emotional means.

00:21:12 But to get them to understand and connect to

00:21:16 and follow a logical argument is difficult.

00:21:19 We try it, we do it, we do it as scientists,

00:21:22 we try to do it as journalists,

00:21:24 we try to do it as even artists in many forms,

00:21:27 as writers, as teachers.

00:21:29 We go through a fairly significant training process

00:21:33 to do that.

00:21:34 And then we could ask, well, why is that so hard?

00:21:39 But it’s hard.

00:21:39 And for humans, it takes a lot of work.

00:21:44 And when we step back and say,

00:21:45 well, how do we get a machine to do that?

00:21:49 It’s a vexing question.

00:21:51 How would you begin to try to solve that?

00:21:55 And maybe just a quick pause,

00:21:57 because there’s an optimistic notion

00:21:59 in the things you’re describing,

00:22:01 which is being able to explain something through reason.

00:22:05 But if you look at algorithms that recommend things

00:22:08 that we’ll look at next, whether it’s Facebook, Google,

00:22:11 advertisement based companies, their goal is to convince you

00:22:18 to buy things based on anything.

00:22:23 So that could be reason,

00:22:25 because the best of advertisement is showing you things

00:22:28 that you really do need and explain why you need it.

00:22:31 But it could also be through emotional manipulation.

00:22:37 The algorithm that describes why a certain decision

00:22:41 was made, how hard is it to do it

00:22:45 through emotional manipulation?

00:22:48 And why is that a good or a bad thing?

00:22:52 So you’ve kind of focused on reason, logic,

00:22:56 really showing in a clear way why something is good.

00:23:02 One, is that even a thing that us humans do?

00:23:05 And two, how do you think of the difference

00:23:09 in the reasoning aspect and the emotional manipulation?

00:23:15 So you call it emotional manipulation,

00:23:17 but more objectively is essentially saying,

00:23:20 there are certain features of things

00:23:22 that seem to attract your attention.

00:23:24 I mean, it kind of gives you more of that stuff.

00:23:26 Manipulation is a bad word.

00:23:28 Yeah, I mean, I’m not saying it’s good right or wrong.

00:23:31 It works to get your attention

00:23:32 and it works to get you to buy stuff.

00:23:34 And when you think about algorithms that look

00:23:36 at the patterns of features

00:23:40 that you seem to be spending your money on

00:23:41 and say, I’m gonna give you something

00:23:43 with a similar pattern.

00:23:44 So I’m gonna learn that function

00:23:46 because the objective is to get you to click on it

00:23:48 or get you to buy it or whatever it is.

00:23:51 I don’t know, I mean, it is what it is.

00:23:53 I mean, that’s what the algorithm does.

00:23:55 You can argue whether it’s good or bad.

00:23:57 It depends what your goal is.
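
(A toy sketch of the feature-similarity recommendation David is describing: score catalog items by how closely their surface features match what the user has already clicked on or bought. The items, features, and history below are invented for illustration.)

```python
# Toy feature-similarity recommender: build a profile from the items
# the user engaged with, then surface more "stuff like that".
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Each row is an item's superficial features (e.g. price bucket, category, length).
catalog = {
    "item_a": np.array([1.0, 0.0, 0.3]),
    "item_b": np.array([0.9, 0.1, 0.4]),
    "item_c": np.array([0.0, 1.0, 0.8]),
}
clicked = ["item_a"]  # what the user has clicked on or bought so far

profile = np.mean([catalog[i] for i in clicked], axis=0)
scores = {name: cosine(profile, feats)
          for name, feats in catalog.items() if name not in clicked}
print(max(scores, key=scores.get))  # show more of that kind of thing
```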

00:24:00 I guess this seems to be very useful

00:24:02 for convincing, for telling a story.

00:24:05 For convincing humans, it’s good

00:24:07 because again, this goes back to what is the human behavior

00:24:11 like, how does the human brain respond to things?

00:24:17 I think there’s a more optimistic view of that too,

00:24:19 which is that if you’re searching

00:24:22 for certain kinds of things,

00:24:23 you’ve already reasoned that you need them.

00:24:26 And these algorithms are saying, look, that’s up to you

00:24:30 to reason whether you need something or not.

00:24:32 That’s your job.

00:24:33 You may have an unhealthy addiction to this stuff

00:24:36 or you may have a reasoned and thoughtful explanation

00:24:42 for why it’s important to you.

00:24:44 And the algorithms are saying, hey, that’s like, whatever.

00:24:47 Like, that’s your problem.

00:24:48 All I know is you’re buying stuff like that.

00:24:50 You’re interested in stuff like that.

00:24:51 Could be a bad reason, could be a good reason.

00:24:53 That’s up to you.

00:24:55 I’m gonna show you more of that stuff.

00:24:57 And I think that it’s not good or bad.

00:25:01 It’s not reasoned or not reasoned.

00:25:03 The algorithm is doing what it does,

00:25:04 which is saying, you seem to be interested in this.

00:25:06 I’m gonna show you more of that stuff.

00:25:09 And I think we’re seeing this not just in buying stuff,

00:25:11 but even in social media.

00:25:12 You’re reading this kind of stuff.

00:25:13 I’m not judging on whether it’s good or bad.

00:25:15 I’m not reasoning at all.

00:25:16 I’m just saying, I’m gonna show you other stuff

00:25:19 with similar features.

00:25:20 And like, and that’s it.

00:25:22 And I wash my hands from it and I say,

00:25:23 that’s all that’s going on.

00:25:25 You know, there is, people are so harsh on AI systems.

00:25:31 So one, the bar of performance is extremely high.

00:25:34 And yet we also ask them to, in the case of social media,

00:25:39 to help find the better angels of our nature

00:25:42 and help make a better society.

00:25:45 What do you think about the role of AI there?

00:25:47 So that, I agree with you.

00:25:48 That’s the interesting dichotomy, right?

00:25:51 Because on one hand, we’re sitting there

00:25:54 and we’re sort of doing the easy part,

00:25:55 which is finding the patterns.

00:25:57 We’re not building, the system’s not building a theory

00:26:01 that is consumable and understandable to other humans

00:26:04 that can be explained and justified.

00:26:06 And so on one hand to say, oh, you know, AI is doing this.

00:26:11 Why isn’t it doing this other thing?

00:26:13 Well, this other thing’s a lot harder.

00:26:16 And it’s interesting to think about why it’s harder.

00:26:20 And because you’re interpreting the data

00:26:23 in the context of prior models.

00:26:26 In other words, understandings

00:26:28 of what’s important in the world, what’s not important.

00:26:30 What are all the other abstract features

00:26:32 that drive our decision making?

00:26:35 What’s sensible, what’s not sensible,

00:26:36 what’s good, what’s bad, what’s moral,

00:26:38 what’s valuable, what isn’t?

00:26:40 Where is that stuff?

00:26:41 No one’s applying the interpretation.

00:26:43 So when I see you clicking on a bunch of stuff

00:26:46 and I look at these simple features, the raw features,

00:26:49 the features that are there in the data,

00:26:51 like what words are being used or how long the material is

00:26:57 or other very superficial features,

00:27:00 what colors are being used in the material.

00:27:02 Like, I don’t know why you’re clicking

00:27:03 on this stuff you’re clicking.

00:27:04 Or if it’s products, what the price is

00:27:07 or what the categories and stuff like that.

00:27:09 And I just feed you more of the same stuff.

00:27:11 That’s very different than kind of getting in there

00:27:13 and saying, what does this mean?

00:27:16 The stuff you’re reading, like why are you reading it?

00:27:21 What assumptions are you bringing to the table?

00:27:23 Are those assumptions sensible?

00:27:26 Does the material make any sense?

00:27:28 Does it lead you to thoughtful, good conclusions?

00:27:34 Again, there’s interpretation and judgment involved

00:27:37 in that process that isn’t really happening in the AI today.

00:27:42 That’s harder because you have to start getting

00:27:47 at the meaning of the stuff, of the content.

00:27:52 You have to get at how humans interpret the content

00:27:55 relative to their value system

00:27:58 and deeper thought processes.

00:28:00 So that’s what meaning means is not just some kind

00:28:04 of deep, timeless, semantic thing

00:28:09 that the statement represents,

00:28:10 but also how a large number of people

00:28:13 are likely to interpret.

00:28:15 So that’s again, even meaning is a social construct.

00:28:19 So you have to try to predict how most people

00:28:22 would understand this kind of statement.

00:28:24 Yeah, meaning is often relative,

00:28:27 but meaning implies that the connections go beneath

00:28:30 the surface of the artifacts.

00:28:31 If I show you a painting, it’s a bunch of colors on a canvas,

00:28:35 what does it mean to you?

00:28:37 And it may mean different things to different people

00:28:39 because of their different experiences.

00:28:42 It may mean something even different

00:28:44 to the artist who painted it.

00:28:47 As we try to get more rigorous with our communication,

00:28:50 we try to really nail down that meaning.

00:28:53 So we go from abstract art to precise mathematics,

00:28:58 precise engineering drawings and things like that.

00:29:01 We’re really trying to say, I wanna narrow

00:29:04 that space of possible interpretations

00:29:08 because the precision of the communication

00:29:10 ends up becoming more and more important.

00:29:13 And so that means that I have to specify,

00:29:17 and I think that’s why this becomes really hard,

00:29:21 because if I’m just showing you an artifact

00:29:24 and you’re looking at it superficially,

00:29:26 whether it’s a bunch of words on a page,

00:29:28 or whether it’s brushstrokes on a canvas

00:29:31 or pixels on a photograph,

00:29:33 you can sit there and you can interpret

00:29:35 lots of different ways at many, many different levels.

00:29:38 But when I wanna align our understanding of that,

00:29:45 I have to specify a lot more stuff

00:29:48 that’s actually not directly in the artifact.

00:29:52 Now I have to say, well, how are you interpreting

00:29:55 this image and that image?

00:29:57 And what about the colors and what do they mean to you?

00:29:59 What perspective are you bringing to the table?

00:30:02 What are your prior experiences with those artifacts?

00:30:05 What are your fundamental assumptions and values?

00:30:08 What is your ability to kind of reason,

00:30:10 to chain together logical implication

00:30:13 as you’re sitting there and saying,

00:30:14 well, if this is the case, then I would conclude this.

00:30:16 And if that’s the case, then I would conclude that.

00:30:19 So your reasoning processes and how they work,

00:30:22 your prior models and what they are,

00:30:25 your values and your assumptions,

00:30:27 all those things now come together into the interpretation.

00:30:30 Getting in sync of that is hard.

00:30:34 And yet humans are able to intuit some of that

00:30:37 without any pre.

00:30:39 Because they have the shared experience.

00:30:41 And we’re not talking about shared,

00:30:42 two people having shared experience.

00:30:44 I mean, as a society.

00:30:45 That’s correct.

00:30:46 We have the shared experience and we have similar brains.

00:30:51 So we tend to, in other words,

00:30:54 part of our shared experiences are shared local experience.

00:30:56 Like we may live in the same culture,

00:30:57 we may live in the same society

00:30:59 and therefore we have similar educations.

00:31:02 We have some of what we like to call prior models

00:31:04 about the world, prior experiences.

00:31:05 And we use that as a,

00:31:07 think of it as a wide collection of interrelated variables

00:31:10 and they’re all bound to similar things.

00:31:12 And so we take that as our background

00:31:15 and we start interpreting things similarly.

00:31:17 But as humans, we have a lot of shared experience.

00:31:21 We do have similar brains, similar goals,

00:31:24 similar emotions under similar circumstances.

00:31:28 Because we’re both humans.

00:31:29 So now one of the early questions you asked,

00:31:31 how is biological and computer information systems

00:31:37 fundamentally different?

00:31:37 Well, one is humans come with a lot of preprogrammed stuff.

00:31:43 A ton of preprogrammed stuff.

00:31:45 And they’re able to communicate

00:31:47 because they share that stuff.

00:31:50 Do you think that shared knowledge,

00:31:54 if we can maybe escape the hardware question,

00:31:57 how much is encoded in the hardware?

00:31:59 Just the shared knowledge in the software,

00:32:01 the history, the many centuries of wars and so on

00:32:05 that came to today, that shared knowledge.

00:32:09 How hard is it to encode?

00:32:14 Do you have a hope?

00:32:15 Can you speak to how hard is it to encode that knowledge

00:32:19 systematically in a way that could be used by a computer?

00:32:22 So I think it is possible to learn to,

00:32:25 for a machine to program a machine,

00:32:27 to acquire that knowledge with a similar foundation.

00:32:31 In other words, a similar interpretive foundation

00:32:36 for processing that knowledge.

00:32:38 What do you mean by that?

00:32:39 So in other words, we view the world in a particular way.

00:32:44 So in other words, we have a, if you will,

00:32:48 as humans, we have a framework

00:32:50 for interpreting the world around us.

00:32:52 So we have multiple frameworks for interpreting

00:32:55 the world around us.

00:32:56 But if you’re interpreting, for example,

00:32:59 socio political interactions,

00:33:01 you’re thinking about where there’s people,

00:33:03 there’s collections and groups of people,

00:33:05 they have goals, goals largely built around survival

00:33:08 and quality of life.

00:33:10 There are fundamental economics around scarcity of resources.

00:33:16 And when humans come and start interpreting

00:33:19 a situation like that, because you brought up

00:33:21 like historical events,

00:33:23 they start interpreting situations like that.

00:33:25 They apply a lot of this fundamental framework

00:33:29 for interpreting that.

00:33:30 Well, who are the people?

00:33:32 What were their goals?

00:33:33 What resources did they have?

00:33:35 How much power influence did they have over the other?

00:33:37 Like this fundamental substrate, if you will,

00:33:40 for interpreting and reasoning about that.

00:33:43 So I think it is possible to imbue a computer

00:33:46 with that stuff that humans like take for granted

00:33:50 when they go and sit down and try to interpret things.

00:33:54 And then with that foundation, they acquire,

00:33:58 they start acquiring the details,

00:34:00 the specifics in a given situation,

00:34:02 are then able to interpret it with regard to that framework.

00:34:05 And then given that interpretation, they can do what?

00:34:08 They can predict.

00:34:10 But not only can they predict,

00:34:12 they can predict now with an explanation

00:34:15 that can be given in those terms,

00:34:17 in the terms of that underlying framework

00:34:20 that most humans share.

00:34:22 Now you could find humans that come and interpret events

00:34:24 very differently than other humans

00:34:26 because they’re like using a different framework.

00:34:30 The movie Matrix comes to mind

00:34:32 where they decided humans were really just batteries,

00:34:36 and that’s how they interpreted the value of humans

00:34:39 as a source of electrical energy.

00:34:41 So, but I think that for the most part,

00:34:45 we have a way of interpreting the events

00:34:50 or the social events around us

00:34:52 because we have this shared framework.

00:34:54 It comes from, again, the fact that we’re similar beings

00:34:58 that have similar goals, similar emotions,

00:35:01 and we can make sense out of these.

00:35:02 These frameworks make sense to us.

00:35:05 So how much knowledge is there, do you think?

00:35:08 So you said it’s possible.

00:35:09 Well, there’s a tremendous amount of detailed knowledge

00:35:12 in the world.

00:35:12 You could imagine effectively infinite number

00:35:17 of unique situations and unique configurations

00:35:20 of these things.

00:35:22 But the knowledge that you need,

00:35:25 what I refer to as like the frameworks,

00:35:27 that you need for interpreting them, I don’t think so.

00:35:29 I think those are finite.

00:35:31 You think the frameworks are more important

00:35:35 than the bulk of the knowledge?

00:35:36 So it’s like framing.

00:35:37 Yeah, because what the frameworks do

00:35:39 is they give you now the ability to interpret and reason,

00:35:41 and to interpret and reason,

00:35:43 to interpret and reason over the specifics

00:35:46 in ways that other humans would understand.

00:35:49 What about the specifics?

00:35:51 You know, you acquire the specifics by reading

00:35:53 and by talking to other people.

00:35:55 So I’m mostly actually just even,

00:35:57 if we can focus on even the beginning,

00:36:00 the common sense stuff,

00:36:01 the stuff that doesn’t even require reading,

00:36:03 or it almost requires playing around with the world

00:36:06 or something, just being able to sort of manipulate objects,

00:36:10 drink water and so on, all of that.

00:36:13 Every time we try to do that kind of thing

00:36:16 in robotics or AI, it seems to be like an onion.

00:36:21 You seem to realize how much knowledge

00:36:23 is really required to perform

00:36:24 even some of these basic tasks.

00:36:27 Do you have that sense as well?

00:36:30 And if so, how do we get all those details?

00:36:33 Are they written down somewhere?

00:36:35 Do they have to be learned through experience?

00:36:39 So I think when, like, if you’re talking about

00:36:41 sort of the physics, the basic physics around us,

00:36:44 for example, acquiring information about,

00:36:46 acquiring how that works.

00:36:49 Yeah, I mean, I think there’s a combination of things going,

00:36:52 I think there’s a combination of things going on.

00:36:54 I think there is like fundamental pattern matching,

00:36:57 like what we were talking about before,

00:36:59 where you see enough examples,

00:37:01 enough data about something and you start assuming that.

00:37:03 And with similar input,

00:37:05 I’m gonna predict similar outputs.

00:37:07 You can’t necessarily explain it at all.

00:37:10 You may learn very quickly that when you let something go,

00:37:14 it falls to the ground.

00:37:16 But you can’t necessarily explain that.

00:37:19 But that’s such a deep idea,

00:37:22 that if you let something go, like the idea of gravity.

00:37:26 I mean, people are letting things go

00:37:27 and counting on them falling

00:37:29 well before they understood gravity.

00:37:30 But that seems to be, that’s exactly what I mean,

00:37:33 is before you take a physics class

00:37:36 or study anything about Newton,

00:37:39 just the idea that stuff falls to the ground

00:37:42 and then you’d be able to generalize

00:37:45 that all kinds of stuff falls to the ground.

00:37:49 It just seems like, without encoding it,

00:37:53 like hard coding it in,

00:37:55 it seems like a difficult thing to pick up.

00:37:57 It seems like you have to have a lot of different knowledge

00:38:01 to be able to integrate that into the framework,

00:38:05 sort of into everything else.

00:38:07 So both know that stuff falls to the ground

00:38:10 and start to reason about sociopolitical discourse.

00:38:16 So both, like the very basic

00:38:18 and the high level reasoning decision making.

00:38:22 I guess my question is, how hard is this problem?

00:38:26 And sorry to linger on it because again,

00:38:29 and we’ll get to it for sure,

00:38:31 as what Watson with Jeopardy did is take on a problem

00:38:34 that’s much more constrained

00:38:35 but has the same hugeness of scale,

00:38:38 at least from the outsider’s perspective.

00:38:40 So I’m asking the general life question

00:38:42 of to be able to be an intelligent being

00:38:45 and reason in the world about both gravity and politics,

00:38:50 how hard is that problem?

00:38:53 So I think it’s solvable.

00:38:59 Okay, now beautiful.

00:39:00 So what about time travel?

00:39:04 Okay, I’m just saying the same answer.

00:39:08 Not as convinced.

00:39:09 Not as convinced yet, okay.

00:39:11 No, I think it is solvable.

00:39:14 I mean, I think that it’s a learn,

00:39:16 first of all, it’s about getting machines to learn.

00:39:18 Learning is fundamental.

00:39:21 And I think we’re already in a place that we understand,

00:39:24 for example, how machines can learn in various ways.

00:39:28 Right now, our learning stuff is sort of primitive

00:39:32 in that we haven’t sort of taught machines

00:39:38 to learn the frameworks.

00:39:39 We don’t communicate our frameworks

00:39:41 because of how shared they are, in some cases we do,

00:39:42 but we don’t annotate, if you will,

00:39:46 all the data in the world with the frameworks

00:39:48 that are inherent or underlying our understanding.

00:39:53 Instead, we just operate with the data.

00:39:56 So if we wanna be able to reason over the data

00:39:59 in similar terms in the common frameworks,

00:40:02 we need to be able to teach the computer,

00:40:03 or at least we need to program the computer

00:40:06 to acquire, to have access to and acquire,

00:40:10 learn the frameworks as well

00:40:12 and connect the frameworks to the data.

00:40:15 I think this can be done.

00:40:18 I think we can start, I think machine learning,

00:40:22 for example, with enough examples,

00:40:26 can start to learn these basic dynamics.

00:40:28 Will they relate them necessarily to the gravity?

00:40:32 Not unless they can also acquire those theories as well

00:40:38 and put the experiential knowledge

00:40:40 and connect it back to the theoretical knowledge.

00:40:43 I think if we think in terms of these class of architectures

00:40:47 that are designed to both learn the specifics,

00:40:51 find the patterns, but also acquire the frameworks

00:40:54 and connect the data to the frameworks.

00:40:56 If we think in terms of robust architectures like this,

00:40:59 I think there is a path toward getting there.

00:41:03 In terms of encoding architectures like that,

00:41:06 do you think systems that are able to do this

00:41:10 will look like neural networks or representing,

00:41:14 if you look back to the 80s and 90s with the expert systems,

00:41:18 they’re more like graphs, systems that are based in logic,

00:41:24 able to contain a large amount of knowledge

00:41:26 where the challenge was the automated acquisition

00:41:28 of that knowledge.

00:41:29 I guess the question is when you collect both the frameworks

00:41:33 and the knowledge from the data,

00:41:35 what do you think that thing will look like?

00:41:37 Yeah, so I mean, I think asking the question,

00:41:39 they look like neural networks is a bit of a red herring.

00:41:41 I mean, I think that they will certainly do inductive

00:41:45 or pattern match based reasoning.

00:41:46 And I’ve already experimented with architectures

00:41:49 that combine both that use machine learning

00:41:52 and neural networks to learn certain classes of knowledge,

00:41:55 in other words, to find repeated patterns

00:41:57 in order for it to make good inductive guesses,

00:42:01 but then ultimately to try to take those learnings

00:42:05 and marry them, in other words, connect them to frameworks

00:42:09 so that it can then reason over that

00:42:11 in terms other humans understand.

00:42:13 So for example, at Elemental Cognition, we do both.

00:42:16 We have architectures that do both, both those things,

00:42:19 but also have a learning method

00:42:21 for acquiring the frameworks themselves and saying,

00:42:24 look, ultimately, I need to take this data.

00:42:27 I need to interpret it in the form of these frameworks

00:42:30 so they can reason over it.

00:42:30 So there is a fundamental knowledge representation,

00:42:33 like what you’re saying,

00:42:34 like these graphs of logic, if you will.

00:42:36 There are also neural networks

00:42:39 that acquire a certain class of information.

00:42:43 Then they then align them with these frameworks,

00:42:45 but there’s also a mechanism

00:42:47 to acquire the frameworks themselves.
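
(A schematic sketch, not Elemental Cognition's actual system, of what "pattern learner plus framework" could look like in code: a statistical component proposes candidate relations, and a symbolic framework of typed relations stores them and reasons over them in human-readable terms. All names and the hard-coded guess are hypothetical.)

```python
# Schematic hybrid sketch: a pattern-matching component proposes
# relations; a symbolic framework stores them as a small graph of
# typed triples and answers queries over it in explainable terms.
from dataclasses import dataclass, field

@dataclass
class Framework:
    relations: set = field(default_factory=set)  # (subject, relation, object) triples

    def add(self, subj, rel, obj):
        self.relations.add((subj, rel, obj))

    def explain(self, subj, obj):
        # One-step lookup: which stored relations connect subj to obj?
        return [(s, r, o) for (s, r, o) in self.relations if s == subj and o == obj]

def pattern_learner(text):
    # Stand-in for a learned neural component: here it just emits a fixed guess.
    return [("water", "flows_over", "turbine"),
            ("turbine", "produces", "electricity")]

fw = Framework()
for triple in pattern_learner("electricity is produced by water flowing over turbines"):
    fw.add(*triple)

print(fw.explain("turbine", "electricity"))  # explanation in framework terms
```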

00:42:49 Yeah, so it seems like the idea of frameworks

00:42:52 requires some kind of collaboration with humans.

00:42:55 Absolutely.

00:42:56 So do you think of that collaboration as direct?

00:42:59 Well, and let’s be clear.

00:43:01 Only for the express purpose that you’re designing,

00:43:06 you’re designing an intelligence

00:43:09 that can ultimately communicate with humans

00:43:12 in the terms of frameworks that help them understand things.

00:43:17 So to be really clear,

00:43:19 you can independently create a machine learning system,

00:43:24 an intelligence that I might call an alien intelligence

00:43:28 that does a better job than you with some things,

00:43:31 but can’t explain the framework to you.

00:43:33 That doesn’t mean it might be better than you at the thing.

00:43:36 It might be that you cannot comprehend the framework

00:43:39 that it may have created for itself that is inexplicable

00:43:42 to you.

00:43:43 That’s a reality.

00:43:45 But you’re more interested in a case where you can.

00:43:48 I am, yeah.

00:43:51 My sort of approach to AI is because

00:43:54 I’ve set the goal for myself.

00:43:55 I want machines to be able to ultimately communicate

00:44:00 understanding with humans.

00:44:01 I want them to be able to acquire and communicate,

00:44:03 acquire knowledge from humans

00:44:04 and communicate knowledge to humans.

00:44:06 They should be using what inductive

00:44:11 machine learning techniques are good at,

00:44:13 which is to observe patterns of data,

00:44:16 whether it be in language or whether it be in images

00:44:19 or videos or whatever,

00:44:23 to acquire these patterns,

00:44:25 to induce the generalizations from those patterns,

00:44:29 but then ultimately to work with humans

00:44:31 to connect them to frameworks, interpretations, if you will,

00:44:34 that ultimately make sense to humans.

00:44:36 Of course, the machine is gonna have the strength

00:44:38 that it has, the richer, longer memory,

00:44:41 but it has the more rigorous reasoning abilities,

00:44:45 the deeper reasoning abilities,

00:44:47 so it’ll be an interesting complementary relationship

00:44:51 between the human and the machine.

00:44:53 Do you think that ultimately needs explainability

00:44:55 like a machine?

00:44:55 So if we look, we study, for example,

00:44:57 Tesla autopilot a lot, where humans,

00:45:00 I don’t know if you’ve driven the vehicle,

00:45:02 are aware of what it is.

00:45:04 So you’re basically the human and machine

00:45:09 are working together there,

00:45:10 and the human is responsible for their own life

00:45:12 to monitor the system,

00:45:14 and the system fails every few miles,

00:45:18 and so there’s hundreds,

00:45:20 there’s millions of those failures a day,

00:45:23 and so that’s like a moment of interaction.

00:45:25 Do you see?

00:45:26 Yeah, that’s exactly right.

00:45:27 That’s a moment of interaction

00:45:29 where the machine has learned some stuff,

00:45:34 it has a failure, somehow the failure’s communicated,

00:45:38 the human is now filling in the mistake, if you will,

00:45:41 or maybe correcting or doing something

00:45:43 that is more successful in that case,

00:45:45 the computer takes that learning.

00:45:47 So I believe that the collaboration

00:45:50 between human and machine,

00:45:52 I mean, that’s sort of a primitive example

00:45:53 and sort of a more,

00:45:56 another example is where the machine’s literally talking

00:45:59 to you and saying, look, I’m reading this thing.

00:46:02 I know that the next word might be this or that,

00:46:06 but I don’t really understand why.

00:46:08 I have my guess.

00:46:09 Can you help me understand the framework that supports this

00:46:14 and then can kind of acquire that,

00:46:16 take that and reason about it and reuse it

00:46:18 the next time it’s reading to try to understand something,

00:46:20 not unlike a human student might do.

00:46:24 I mean, I remember when my daughter was in first grade

00:46:27 and she had a reading assignment about electricity

00:46:32 and somewhere in the text it says,

00:46:35 and electricity is produced by water flowing over turbines

00:46:38 or something like that.

00:46:39 And then there’s a question that says,

00:46:41 well, how is electricity created?

00:46:43 And so my daughter comes to me and says,

00:46:45 I mean, I could, you know,

00:46:46 created and produced are kind of synonyms in this case.

00:46:49 So I can go back to the text

00:46:50 and I can copy by water flowing over turbines,

00:46:53 but I have no idea what that means.

00:46:56 Like I don’t know how to interpret

00:46:57 water flowing over turbines and what electricity even is.

00:47:00 I mean, I can get the answer right by matching the text,

00:47:04 but I don’t have any framework for understanding

00:47:06 what this means at all.
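
(A toy illustration of the shallow matching David's daughter describes: get the answer string right by copying what follows a synonym of the question's keyword, with no framework behind it. The passage and synonym list come straight from the example; the code itself is just a sketch.)

```python
# Answering by surface matching, with no understanding: find a synonym
# of the question's verb in the passage and copy whatever follows it.
passage = "Electricity is produced by water flowing over turbines."
question_verb = "created"
synonyms = {"created": ["produced", "made", "generated"]}

answer = None
for candidate in synonyms[question_verb]:
    if candidate in passage:
        answer = passage.split(candidate, 1)[1].strip(" .")
        break

print(answer)  # "by water flowing over turbines" -- right string, zero framework
```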

00:47:07 And framework really is, I mean, it’s a set of,

00:47:10 not to be mathematical, but axioms of ideas

00:47:14 that you bring to the table and interpreting stuff

00:47:16 and then you build those up somehow.

00:47:18 You build them up with the expectation

00:47:20 that there’s a shared understanding of what they are.

00:47:23 Sure, yeah, it’s the social, that us humans,

00:47:28 do you have a sense that humans on earth in general

00:47:32 share a set of, like how many frameworks are there?

00:47:36 I mean, it depends on how you bound them, right?

00:47:38 So in other words, how big or small,

00:47:39 like their individual scope,

00:47:42 but there’s lots and there are new ones.

00:47:44 I think the way I think about it is kind of in a layer.

00:47:47 I think that the architectures are being layered in that.

00:47:50 There’s a small set of primitives.

00:47:53 They allow you the foundation to build frameworks.

00:47:56 And then there may be many frameworks,

00:47:58 but you have the ability to acquire them.

00:48:00 And then you have the ability to reuse them.

00:48:03 I mean, one of the most compelling ways

00:48:04 of thinking about this is a reasoning by analogy,

00:48:07 where I can say, oh, wow,

00:48:08 I’ve learned something very similar.

00:48:11 I never heard of this game soccer,

00:48:15 but if it’s like basketball in the sense

00:48:17 that the goal’s like the hoop

00:48:19 and I have to get the ball in the hoop

00:48:20 and I have guards and I have this and I have that,

00:48:23 like where are the similarities

00:48:26 and where are the differences?

00:48:27 And I have a foundation now

00:48:29 for interpreting this new information.
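
(A small sketch of the reasoning-by-analogy idea in the basketball/soccer example: reuse an existing framework by mapping the roles that line up and flagging the ones that don't. The frameworks and the analogy mapping are invented for illustration.)

```python
# Reuse a known framework (basketball) to interpret a new game (soccer):
# carry roles over where the analogy maps them, flag the rest as differences.
basketball = {"scoring_target": "hoop", "ball_handling": "hands", "defense": "guards"}
analogy = {"hoop": "goal", "guards": "defenders"}   # similarities found so far

soccer, differences = {}, {}
for role, concept in basketball.items():
    if concept in analogy:
        soccer[role] = analogy[concept]   # interpret via the existing framework
    else:
        differences[role] = concept       # needs a new interpretation (hands vs. feet)

print(soccer)       # {'scoring_target': 'goal', 'defense': 'defenders'}
print(differences)  # {'ball_handling': 'hands'}
```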

00:48:31 And then the different groups,

00:48:33 like the millennials will have a framework.

00:48:36 And then, you know, the Democrats and Republicans.

00:48:41 Millennials, nobody wants that framework.

00:48:43 Well, I mean, I think, right,

00:48:45 I mean, you’re talking about political and social ways

00:48:48 of interpreting the world around them.

00:48:49 And I think these frameworks are still largely,

00:48:51 largely similar.

00:48:52 I think they differ in maybe

00:48:54 what some fundamental assumptions and values are.

00:48:57 Now, from a reasoning perspective,

00:48:59 like the ability to process the framework,

00:49:01 it might not be that different.

00:49:04 The implications of different fundamental values

00:49:06 or fundamental assumptions in those frameworks

00:49:09 may reach very different conclusions.

00:49:12 So from a social perspective,

00:49:14 the conclusions may be very different.

00:49:16 From an intelligence perspective,

00:49:18 I just followed where my assumptions took me.

00:49:21 Yeah, the process itself will look similar.

00:49:23 But that’s a fascinating idea

00:49:25 that frameworks really help carve

00:49:30 how a statement will be interpreted.

00:49:33 I mean, having a Democrat and a Republican framework

00:49:40 and then read the exact same statement

00:49:42 and the conclusions that you derive

00:49:44 will be totally different

00:49:45 from an AI perspective is fascinating.

00:49:47 What we would want out of the AI

00:49:49 is to be able to tell you

00:49:51 that this perspective, one perspective,

00:49:53 one set of assumptions is gonna lead you here,

00:49:55 another set of assumptions is gonna lead you there.

00:49:58 And in fact, to help people reason and say,

00:50:01 oh, I see where our differences lie.

00:50:05 I have this fundamental belief about that.

00:50:06 I have this fundamental belief about that.
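
(A minimal sketch of what surfacing "one set of assumptions leads you here, another leads you there" could look like: the same statement run through two hypothetical assumption sets, each yielding a different conclusion. The rules and labels are invented purely for illustration.)

```python
# Same statement, two assumption sets, two different conclusions:
# making the point of divergence explicit instead of arguing past it.
statement = {"policy_raises_taxes": True}

frameworks = {
    "framework_A": {"taxes_fund_services": True},
    "framework_B": {"taxes_slow_growth": True},
}

def conclude(stmt, assumptions):
    if stmt["policy_raises_taxes"] and assumptions.get("taxes_fund_services"):
        return "expect more public services"
    if stmt["policy_raises_taxes"] and assumptions.get("taxes_slow_growth"):
        return "expect slower economic growth"
    return "no conclusion"

for name, assumptions in frameworks.items():
    print(name, "->", conclude(statement, assumptions))  # where the difference lies
```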

00:50:09 Yeah, that’s quite brilliant.

00:50:10 From my perspective, NLP,

00:50:12 there’s this idea that there’s one way

00:50:14 to really understand a statement,

00:50:16 but that probably isn’t the case.

00:50:18 There’s probably an infinite number of ways

00:50:20 to understand a statement, depending on the question.

00:50:21 There’s lots of different interpretations,

00:50:23 and the broader the content, the richer it is.

00:50:31 And so you and I can have very different experiences

00:50:35 with the same text, obviously.

00:50:37 And if we’re committed to understanding each other,

00:50:42 we start, and that’s the other important point,

00:50:45 if we’re committed to understanding each other,

00:50:47 we start decomposing and breaking down our interpretation

00:50:51 to its more and more primitive components

00:50:54 until we get to that point where we say,

00:50:55 oh, I see why we disagree.

00:50:58 And we try to understand how fundamental

00:51:00 that disagreement really is.

00:51:02 But that requires a commitment

00:51:04 to breaking down that interpretation

00:51:06 in terms of that framework in a logical way.

00:51:08 Otherwise, and this is why I think of AI

00:51:12 as really complementing and helping human intelligence

00:51:16 to overcome some of its biases and its predisposition

00:51:19 to be persuaded by more shallow reasoning

00:51:25 in the sense that we get over this idea,

00:51:26 well, I’m right because I’m Republican,

00:51:29 or I’m right because I’m Democratic,

00:51:31 and someone labeled this as Democratic point of view,

00:51:33 or it has the following keywords in it.

00:51:35 And if the machine can help us break that argument down

00:51:38 and say, wait a second, what do you really think

00:51:41 about this, right?

00:51:42 So essentially holding us accountable

00:51:45 to doing more critical thinking.

00:51:47 We’re gonna have to sit and think about this fast.

00:51:49 That’s, I love that.

00:51:50 I think that’s really empowering use of AI

00:51:53 for the public discourse is completely disintegrating

00:51:57 currently as we learn how to do it on social media.

00:52:00 That’s right.

00:52:02 So one of the greatest accomplishments

00:52:05 in the history of AI is Watson competing

00:52:12 in the game of Jeopardy against humans.

00:52:14 And you were a lead in that, a critical part of that.

00:52:18 Let’s start at the very basics.

00:52:20 What is the game of Jeopardy?

00:52:22 The game for us humans, human versus human.

00:52:25 Right, so it’s to take a question and answer it.

00:52:33 The game of Jeopardy.

00:52:34 It’s just the opposite.

00:52:35 Actually, well, no, but it’s not, right?

00:52:38 It’s really not.

00:52:39 It’s really to get a question and answer,

00:52:41 but it’s what we call a factoid question.

00:52:43 So this notion of like, it really relates to some fact

00:52:46 that two people would argue

00:52:49 whether the facts are true or not.

00:52:50 In fact, most people wouldn’t.

00:52:51 Jeopardy kind of counts on the idea

00:52:53 that these statements have factual answers.

00:52:57 And the idea is to, first of all,

00:53:02 determine whether or not you know the answer,

00:53:03 which is sort of an interesting twist.

00:53:06 So first of all, understand the question.

00:53:07 You have to understand the question.

00:53:08 What is it asking?

00:53:09 And that’s a good point

00:53:10 because the questions are not asked directly, right?

00:53:14 They’re all like,

00:53:15 the way the questions are asked is nonlinear.

00:53:18 It’s like, it’s a little bit witty.

00:53:20 It’s a little bit playful sometimes.

00:53:22 It’s a little bit tricky.

00:53:25 Yeah, they’re asked in exactly numerous witty, tricky ways.

00:53:30 Exactly what they’re asking is not obvious.

00:53:32 It takes inexperienced humans a while

00:53:35 to go, what is it even asking?

00:53:36 And it’s sort of an interesting realization that you have

00:53:39 when somebody says, oh, what’s,

00:53:40 Jeopardy is a question answering show.

00:53:42 And then he’s like, oh, like, I know a lot.

00:53:43 And then you read it and you’re still trying

00:53:45 to process the question and the champions have answered

00:53:48 and moved on.

00:53:49 They’re three questions ahead

00:53:51 by the time you figured out what the question even meant.

00:53:54 So there’s definitely an ability there

00:53:56 to just parse out what the question even is.

00:53:59 So that was certainly challenging.

00:54:00 It’s interesting historically though,

00:54:02 if you look back at the Jeopardy games much earlier,

00:54:05 you know, early games. Like 60s, 70s, that kind of thing.

00:54:08 The questions were much more direct.

00:54:10 They weren’t quite like that.

00:54:11 They got sort of more and more interesting,

00:54:13 the way they asked them that sort of got more

00:54:15 and more interesting and subtle and nuanced

00:54:18 and humorous and witty over time,

00:54:20 which really required the human

00:54:22 to kind of make the right connections

00:54:24 in figuring out what the question was even asking.

00:54:26 So yeah, you have to figure out what the question’s even asking.

00:54:29 Then you have to determine whether

00:54:31 or not you think you know the answer.

00:54:34 And because you have to buzz in really quickly,

00:54:37 you sort of have to make that determination

00:54:39 as quickly as you possibly can.

00:54:41 Otherwise you lose the opportunity to buzz in.

00:54:43 You mean…

00:54:44 Even before you really know if you know the answer.

00:54:46 I think a lot of humans will assume,

00:54:48 they’ll process it very superficially.

00:54:53 In other words, what’s the topic?

00:54:54 What are some keywords?

00:54:55 And just say, do I know this area or not

00:54:58 before they actually know the answer?

00:55:00 Then they’ll buzz in and think about it.

00:55:03 So it’s interesting what humans do.

00:55:04 Now, some people who know all things,

00:55:06 like Ken Jennings or something,

00:55:08 or the more recent big Jeopardy player,

00:55:11 I mean, they’ll just buzz in.

00:55:12 They’ll just assume they know all of Jeopardy

00:55:14 and they’ll just buzz in.

00:55:15 Watson, interestingly, didn’t even come close

00:55:18 to knowing all of Jeopardy, right?

00:55:20 Watson really…

00:55:20 Even at the peak, even at its best.

00:55:22 Yeah, so for example, I mean,

00:55:24 we had this thing called recall,

00:55:25 which is like how many of all the Jeopardy questions,

00:55:29 how many could we even find the right answer for anywhere?

00:55:34 Like, can we come up with, we had a big body of knowledge,

00:55:38 something in the order of several terabytes.

00:55:39 I mean, from a web scale, it was actually very small,

00:55:42 but from like a book scale,

00:55:44 we’re talking about millions of books, right?

00:55:46 So the equivalent of millions of books,

00:55:48 encyclopedias, dictionaries, books,

00:55:50 it’s still a ton of information.

00:55:52 And I think for only 85% of the questions was the answer

00:55:55 anywhere to be found.

00:55:57 So you’re already down at that level

00:56:00 just to get started, right?
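
To make that "recall" number concrete, here is a minimal sketch of the kind of corpus-coverage check being described, assuming a simple list of question–answer pairs and plain-text documents; the names and data are illustrative, not Watson's actual tooling.

```python
# Illustrative sketch (not Watson's code): estimate corpus "recall" --
# the fraction of questions whose known answer shows up anywhere in
# the reference corpus at all, before any answering is attempted.

def corpus_recall(qa_pairs, documents):
    """qa_pairs: list of (question, gold_answer); documents: list of strings."""
    corpus_text = " ".join(doc.lower() for doc in documents)
    found = sum(1 for _, answer in qa_pairs if answer.lower() in corpus_text)
    return found / len(qa_pairs)

qa = [("This 'Belle of Amherst' wrote nearly 1800 poems", "Emily Dickinson"),
      ("First person to walk on the moon", "Neil Armstrong")]
docs = ["Emily Dickinson was an American poet who lived in Amherst."]
print(f"corpus recall: {corpus_recall(qa, docs):.0%}")   # 50% in this toy case
```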

00:56:02 So, and so it was important to get a very quick sense

00:56:07 of do you think you know the right answer to this question?

00:56:10 So we had to compute that confidence

00:56:12 as quickly as we possibly could.

00:56:14 So in effect, we had to answer it

00:56:16 and at least spend some time essentially answering it

00:56:22 and then judging the confidence that our answer was right

00:56:26 and then deciding whether or not

00:56:28 we were confident enough to buzz in.

00:56:30 And that would depend on what else was going on in the game.

00:56:31 Because there was a risk.

00:56:33 So like if you’re really in a situation

00:56:35 where I have to take a guess, I have very little to lose,

00:56:38 then you’ll buzz in with less confidence.

00:56:40 So that accounted for the financial standings

00:56:42 of the different competitors.

00:56:44 Correct.

00:56:45 How much of the game was left?

00:56:46 How much time was left?

00:56:48 Where you were in the standing, things like that.
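
As a rough illustration of the buzz-in decision being described, here is a small sketch of a confidence threshold that loosens as the game state leaves less to lose; the formula and parameters are assumptions for illustration, not Watson's actual strategy.

```python
# Illustrative sketch (assumptions mine, not Watson's actual strategy):
# buzz in when the estimated confidence clears a threshold that loosens
# as the game state leaves you with less to lose.

def should_buzz(confidence, my_score, leader_score, clues_remaining,
                base_threshold=0.50):
    """confidence: estimated probability that the top-ranked answer is right."""
    # Crude proxy for risk: how far behind we are relative to the points
    # that plausibly remain on the board (assumed ~1000 per clue here).
    deficit = max(leader_score - my_score, 0)
    desperation = min(deficit / max(clues_remaining * 1000, 1), 1.0)
    # The more desperate the situation, the lower the bar to buzz.
    threshold = base_threshold * (1.0 - 0.6 * desperation)
    return confidence >= threshold

# Trailing badly late in the game lowers the bar, so a shaky answer still buzzes:
print(should_buzz(0.35, my_score=2000, leader_score=20000, clues_remaining=5))
```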

00:56:50 How many hundreds of milliseconds

00:56:52 that we’re talking about here?

00:56:53 Do you have a sense of what is?

00:56:55 We targeted, yeah, we targeted.

00:56:58 So, I mean, we targeted answering

00:57:01 in under three seconds and.

00:57:04 Buzzing in.

00:57:05 So the decision to buzz in and then the actual answering

00:57:09 are those two different stages?

00:57:10 Yeah, they were two different things.

00:57:12 In fact, we had multiple stages,

00:57:14 whereas like we would say, let’s estimate our confidence,

00:57:17 which was sort of a shallow answering process.

00:57:21 And then ultimately decide to buzz in

00:57:23 and then we may take another second or something

00:57:27 to kind of go in there and do that.

00:57:30 But by and large, we were saying like,

00:57:32 we can’t play the game.

00:57:33 We can’t even compete if we can’t on average

00:57:37 answer these questions in around three seconds or less.

00:57:40 So you stepped in.

00:57:41 So there’s these three humans playing a game

00:57:45 and you stepped in with the idea that IBM Watson

00:57:47 would be one of, replace one of the humans

00:57:49 and compete against two.

00:57:52 Can you tell the story of Watson taking on this game?

00:57:56 Sure.

00:57:57 It seems exceptionally difficult.

00:57:58 Yeah, so the story was that it was coming up,

00:58:03 I think to the 10 year anniversary of Big Blue,

00:58:06 not Big Blue, Deep Blue.

00:58:08 IBM wanted to do sort of another kind of really

00:58:11 fun challenge, public challenge that can bring attention

00:58:15 to IBM research and the kind of the cool stuff

00:58:17 that we were doing.

00:58:19 I had been working in AI at IBM for some time.

00:58:23 I had a team doing what’s called

00:58:26 open domain factoid question answering,

00:58:28 which is, we’re not gonna tell you what the questions are.

00:58:31 We’re not even gonna tell you what they’re about.

00:58:33 Can you go off and get accurate answers to these questions?

00:58:36 And it was an area of AI research that I was involved in.

00:58:41 And so it was a very specific passion of mine.

00:58:44 Language understanding had always been a passion of mine.

00:58:47 One sort of narrow slice on whether or not

00:58:49 you could do anything with language

00:58:51 was this notion of open domain and meaning

00:58:52 I could ask anything about anything.

00:58:54 Factoid meaning it essentially had an answer

00:58:57 and being able to do that accurately and quickly.

00:59:00 So that was a research area

00:59:02 that my team had already been in.

00:59:03 And so completely independently,

00:59:06 several IBM executives, like what are we gonna do?

00:59:09 What’s the next cool thing to do?

00:59:11 And Ken Jennings was on his winning streak.

00:59:13 This was like, whatever it was, 2004, I think,

00:59:16 was on his winning streak.

00:59:18 And someone thought, hey, that would be really cool

00:59:20 if the computer can play Jeopardy.

00:59:23 And so this was like in 2004,

00:59:25 they were shopping this thing around

00:59:28 and everyone was telling the research execs, no way.

00:59:33 Like, this is crazy.

00:59:35 And we had some pretty senior people in the field

00:59:37 and they’re saying, no, this is crazy.

00:59:38 And it would come across my desk and I was like,

00:59:40 but that’s kind of what I’m really interested in doing.

00:59:44 But there was such this prevailing sense of this is nuts.

00:59:47 We’re not gonna risk IBM’s reputation on this.

00:59:49 We’re just not doing it.

00:59:50 And this happened in 2004, it happened in 2005.

00:59:53 At the end of 2006, it was coming around again.

00:59:59 And I was coming off of a,

01:00:01 I was doing the open domain question answering stuff,

01:00:03 but I was coming off a couple other projects.

01:00:05 I had a lot more time to put into this.

01:00:08 And I argued that it could be done.

01:00:10 And I argue it would be crazy not to do this.

01:00:12 Can I, you can be honest at this point.

01:00:15 So even though you argued for it,

01:00:17 what’s the confidence that you had yourself privately

01:00:21 that this could be done?

00:59:22 Or was it that you just told the story

00:59:25 the way you tell stories to convince others?

01:00:27 How confident were you?

01:00:28 What was your estimation of the problem at that time?

01:00:32 So I thought it was possible.

01:00:34 And a lot of people thought it was impossible.

01:00:36 I thought it was possible.

01:00:37 The reason why I thought it was possible

01:00:39 was because I did some brief experimentation.

01:00:41 I knew a lot about how we were approaching

01:00:43 open domain factoid question answering.

01:00:45 I’ve been doing it for some years.

01:00:47 I looked at the Jeopardy stuff.

01:00:49 I said, this is gonna be hard

01:00:50 for a lot of the points that we mentioned earlier.

01:00:54 Hard to interpret the question.

01:00:57 Hard to do it quickly enough.

01:00:58 Hard to compute an accurate confidence.

01:01:00 None of this stuff had been done well enough before.

01:01:03 But a lot of the technologies we’re building

01:01:04 were the kinds of technologies that should work.

01:01:07 But more to the point, what was driving me was,

01:01:10 I was in IBM research.

01:01:12 I was a senior leader in IBM research.

01:01:14 And this is the kind of stuff we were supposed to do.

01:01:17 In other words, we were basically supposed to.

01:01:18 This is the moonshot.

01:01:19 This is the.

01:01:20 We were supposed to take things and say,

01:01:21 this is an active research area.

01:01:24 It’s our obligation to kind of,

01:01:27 if we have the opportunity, to push it to the limits.

01:01:30 And if it doesn’t work,

01:01:31 to understand more deeply why we can’t do it.

01:01:34 And so I was very committed to that notion saying,

01:01:37 folks, this is what we do.

01:01:40 It’s crazy not to do this.

01:01:42 This is an active research area.

01:01:43 We’ve been in this for years.

01:01:44 Why wouldn’t we take this grand challenge

01:01:47 and push it as hard as we can?

01:01:50 At the very least, we’d be able to come out and say,

01:01:53 here’s why this problem is way hard.

01:01:57 Here’s what we tried and here’s how we failed.

01:01:58 So I was very driven as a scientist from that perspective.

01:02:03 And then I also argued,

01:02:06 based on what we did a feasibility study,

01:02:08 why I thought it was hard but possible.

01:02:10 And I showed examples of where it succeeded,

01:02:14 where it failed, why it failed,

01:02:16 and sort of a high level architecture approach

01:02:18 for why we should do it.

01:02:19 But for the most part, at that point,

01:02:22 the execs really were just looking for someone crazy enough

01:02:24 to say yes, because for several years at that point,

01:02:27 everyone had said, no, I’m not willing to risk my reputation

01:02:32 and my career on this thing.

01:02:34 Clearly you did not have such fears.

01:02:36 Okay. I did not.

01:02:37 So you dived right in.

01:02:39 And yet, for what I understand,

01:02:42 it was performing very poorly in the beginning.

01:02:46 So what were the initial approaches and why did they fail?

01:02:51 Well, there were lots of hard aspects to it.

01:02:54 I mean, one of the reasons why prior approaches

01:02:57 that we had worked on in the past failed

01:03:02 was because the questions were difficult to interpret.

01:03:07 Like, what are you even asking for, right?

01:03:10 Very often, like if the question was very direct,

01:03:12 like what city, or what, even then it could be tricky,

01:03:16 but what city or what person,

01:03:21 often it would name it very clearly,

01:03:24 and you would know that.

01:03:25 And if there were just a small set of them,

01:03:28 in other words, we’re gonna ask about these five types.

01:03:31 Like, it’s gonna be an answer,

01:03:33 and the answer will be a city in this state

01:03:36 or a city in this country.

01:03:37 The answer will be a person of this type, right?

01:03:41 Like an actor or whatever it is.

01:03:42 But it turns out that in Jeopardy,

01:03:44 there were like tens of thousands of these things.

01:03:47 And it was a very, very long tail,

01:03:50 meaning that it just went on and on.

01:03:52 And so even if you focused on trying to encode the types

01:03:56 at the very top, like there’s five that were the most,

01:03:59 let’s say five of the most frequent,

01:04:01 you’d still only cover a very small percentage of the data.

01:04:04 So you couldn’t take that approach of saying,

01:04:07 I’m just going to try to collect facts

01:04:09 about these five or 10 types or 20 types

01:04:12 or 50 types or whatever.

01:04:14 So that was like one of the first things,

01:04:16 like what do you do about that?

01:04:18 And so we came up with an approach toward that.

01:04:21 And the approach looked promising,

01:04:23 and we continued to improve our ability

01:04:25 to handle that problem throughout the project.

01:04:29 The other issue was that right from the outset,

01:04:32 I said, we’re not going to,

01:04:34 I committed to doing this in three to five years.

01:04:37 So we did it in four.

01:04:39 So I got lucky.

01:04:40 But one of the things that that,

01:04:42 putting that like stake in the ground was,

01:04:45 and I knew how hard the language understanding problem was.

01:04:47 I said, we’re not going to actually understand language

01:04:51 to solve this problem.

01:04:53 We are not going to interpret the question

01:04:57 and the domain of knowledge that the question refers to

01:05:00 and reason over that to answer these questions.

01:05:02 Obviously we’re not going to be doing that.

01:05:04 At the same time,

01:05:05 simple search wasn’t good enough to confidently answer

01:05:10 with a single correct answer.

01:05:13 First of all, that’s like brilliant.

01:05:14 That’s such a great mix of innovation

01:05:16 and practical engineering, three, four years.

01:05:18 So you’re not trying to solve the general NLU problem.

01:05:21 You’re saying, let’s solve this in any way possible.

01:05:25 Oh, yeah.

01:05:26 No, I was committed to saying, look,

01:05:28 we’re going to solving the open domain

01:05:29 question answering problem.

01:05:31 We’re using Jeopardy as a driver for that.

01:05:33 That’s a big benchmark.

01:05:34 Good enough, big benchmark, exactly.

01:05:36 And now we’re.

01:05:38 How do we do it?

01:05:39 We could just like, whatever,

01:05:39 like just figure out what works

01:05:41 because I want to be able to go back

01:05:42 to the academic science community

01:05:44 and say, here’s what we tried.

01:05:45 Here’s what worked.

01:05:46 Here’s what didn’t work.

01:05:47 Great.

01:05:48 I don’t want to go in and say,

01:05:50 oh, I only have one technology.

01:05:51 I have a hammer.

01:05:52 I’m only going to use this.

01:05:53 I’m going to do whatever it takes.

01:05:54 I’m like, I’m going to think out of the box

01:05:56 and do whatever it takes.

01:05:57 One, and I also, there was another thing I believed.

01:06:00 I believed that the fundamental NLP technologies

01:06:04 and machine learning technologies would be adequate.

01:06:08 And this was an issue of how do we enhance them?

01:06:11 How do we integrate them?

01:06:13 How do we advance them?

01:06:15 So I had one researcher who came to me

01:06:17 who had been working on question answering

01:06:18 with me for a very long time,

01:06:21 who had said, we’re going to need Maxwell’s equations

01:06:24 for question answering.

01:06:25 And I said, if we need some fundamental formula

01:06:28 that breaks new ground in how we understand language,

01:06:31 we’re screwed.

01:06:33 We’re not going to get there from here.

01:06:34 Like I am not counting.

01:06:38 My assumption is I’m not counting

01:06:39 on some brand new invention.

01:06:42 What I’m counting on is the ability

01:06:45 to take everything that has been done before

01:06:48 to figure out an architecture on how to integrate it well

01:06:51 and then see where it breaks

01:06:54 and make the necessary advances we need to make

01:06:57 until this thing works.

01:06:58 Push it hard to see where it breaks

01:07:00 and then patch it up.

01:07:01 I mean, that’s how people change the world.

01:07:03 I mean, that’s the Elon Musk approach to the rockets,

01:07:05 SpaceX, that’s the Henry Ford and so on.

01:07:08 I love it.

01:07:09 And I happen to be, in this case, I happen to be right,

01:07:11 but like we didn’t know.

01:07:14 But you kind of have to put a stake in the ground

01:07:15 for how you’re going to run the project.

01:07:17 So yeah, and backtracking to search.

01:07:20 So if you were to do, what’s the brute force solution?

01:07:24 What would you search over?

01:07:26 So you have a question,

01:07:27 how would you search the possible space of answers?

01:07:31 Look, web search has come a long way even since then.

01:07:34 But at the time, first of all,

01:07:37 I mean, there were a couple of other constraints

01:07:39 around the problem, which is interesting.

01:07:40 So you couldn’t go out to the web.

01:07:43 You couldn’t search the internet.

01:07:44 In other words, the AI experiment was,

01:07:47 we want a self contained device.

01:07:50 If the device is as big as a room, fine,

01:07:52 it’s as big as a room,

01:07:53 but we want a self contained device.

01:07:57 You’re not going out to the internet.

01:07:59 You don’t have a lifeline to anything.

01:08:01 So it had to kind of fit in a shoe box, if you will,

01:08:04 or at least the size of a few refrigerators,

01:08:06 whatever it might be.

01:08:08 See, but also you couldn’t just get out there.

01:08:10 You couldn’t go off network, right, to kind of go.

01:08:13 So there was that limitation.

01:08:14 But then we did, but the basic thing was go do web search.

01:08:19 Problem was, even when we went and did a web search,

01:08:22 I don’t remember exactly the numbers,

01:08:24 but somewhere in the order of 65% of the time,

01:08:27 the answer would be somewhere, you know,

01:08:30 in the top 10 or 20 documents.

01:08:32 So first of all, that’s not even good enough to play Jeopardy.

01:08:36 You know, in other words, even if you could pull the,

01:08:38 even if you could perfectly pull the answer

01:08:40 out of the top 20 documents, top 10 documents,

01:08:42 whatever it was, which we didn’t know how to do.

01:08:45 But even if you could do that,

01:08:47 you’d need to know it was right,

01:08:49 you’d need enough confidence in it, right?

01:08:50 So you’d have to pull out the right answer.

01:08:52 You’d have to have confidence it was the right answer.

01:08:54 And then you’d have to do that fast enough to now go buzz in

01:08:58 and you’d still only get 65% of them right,

01:09:00 which doesn’t even put you in the winner’s circle.

01:09:02 Winner’s circle, you have to be up over 70

01:09:05 and you have to do it really quickly.

01:09:08 But now the problem is, well,

01:09:10 even if I had somewhere in the top 10 documents,

01:09:12 how do I figure out where in the top 10 documents

01:09:14 that answer is and how do I compute a confidence

01:09:18 of all the possible candidates?

01:09:19 So it’s not like I go in knowing the right answer

01:09:21 and I have to pick it.

01:09:22 I don’t know the right answer.

01:09:23 I have a bunch of documents,

01:09:25 somewhere in there is the right answer.

01:09:27 How do I, as a machine, go out

01:09:28 and figure out which one’s right?

01:09:30 And then how do I score it?

01:09:32 So, and now how do I deal with the fact

01:09:35 that I can’t actually go out to the web?

01:09:37 First of all, if you pause on that, just think about it.

01:09:40 If you could go to the web,

01:09:42 do you think that problem is solvable

01:09:44 if you just pause on it?

01:09:45 Just thinking even beyond Jeopardy,

01:09:49 do you think the problem of reading text

01:09:51 to find where the answer is, is solvable?

01:09:53 Well, we solved that, in some definition of solved,

01:09:56 given the Jeopardy challenge.

01:09:58 How did you do it for Jeopardy?

01:09:59 So how do you take a body of work in a particular topic

01:10:03 and extract the key pieces of information?

01:10:05 So now forgetting about the huge volumes

01:10:09 that are on the web, right?

01:10:10 So now we have to figure out,

01:10:11 we did a lot of source research.

01:10:12 In other words, what body of knowledge

01:10:15 is gonna be small enough,

01:10:17 but broad enough to answer jeopardy?

01:10:19 And we ultimately did find the body of knowledge

01:10:21 that did that.

01:10:22 I mean, it included Wikipedia and a bunch of other stuff.

01:10:25 So like encyclopedia type of stuff.

01:10:26 I don’t know if you can speak to it.

01:10:27 Encyclopedias, dictionaries,

01:10:28 different types of semantic resources,

01:10:31 like WordNet and other types of semantic resources like that,

01:10:33 as well as like some web crawls.

01:10:36 In other words, where we went out and took that content

01:10:39 and then expanded it based on producing,

01:10:41 statistically producing seeds,

01:10:44 using those seeds for other searches and then expanding that.

01:10:48 So using these like expansion techniques,

01:10:51 we went out and had found enough content

01:10:53 and we’re like, okay, this is good.

01:10:54 And even up until the end,

01:10:56 we had a thread of research.

01:10:58 It was always trying to figure out

01:10:59 what content could we efficiently include.

01:11:02 I mean, there’s a lot of popular,

01:11:03 like what is the church lady?

01:11:05 Well, I think was one of the, like what,

01:11:09 where do you, I guess that’s probably an encyclopedia, so.

01:11:12 So that was an encyclopedia,

01:11:13 but then we would take that stuff

01:11:16 and we would go out and we would expand.

01:11:17 In other words, we’d go find other content

01:11:20 that wasn’t in the core resources and expand it.

01:11:23 The amount of content, we grew it by an order of magnitude,

01:11:26 but still, again, from a web scale perspective,

01:11:28 this is very small amount of content.

01:11:30 It’s very select.

01:11:31 We then took all that content,

01:11:33 we preanalyzed the crap out of it,

01:11:35 meaning we parsed it,

01:11:38 broke it down into all those individual words

01:11:40 and then we did semantic,

01:11:42 syntactic and semantic parses on it,

01:11:44 had computer algorithms that annotated it

01:11:46 and we indexed that in a very rich and very fast index.

01:11:53 So we have a relatively huge amount of,

01:11:55 let’s say the equivalent of, for the sake of argument,

01:11:57 two to five million books.

01:11:58 We’ve now analyzed all that, blowing up its size even more

01:12:01 because now we have all this metadata

01:12:03 and then we richly indexed all of that

01:12:05 and by the way, in a giant in memory cache.

01:12:08 So Watson did not go to disk.
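
A minimal sketch of the "pre-analyze everything and keep it in memory" idea: a toy inverted index that stores precomputed annotations alongside its postings. Watson's real indexes held far richer syntactic and semantic parses behind heavily engineered search engines; this only shows the shape of the approach.

```python
# Minimal sketch of "pre-analyze the content and keep it in memory":
# a toy inverted index that stores precomputed annotations per document.
# (The real system pre-parsed content syntactically and semantically;
# this only illustrates the idea of paying the analysis cost up front.)

from collections import defaultdict

class InMemoryIndex:
    def __init__(self):
        self.postings = defaultdict(set)   # term -> set of document ids
        self.annotations = {}              # document id -> precomputed metadata

    def add(self, doc_id, text):
        tokens = [t.lower().strip(".,") for t in text.split()]
        for t in tokens:
            self.postings[t].add(doc_id)
        # Stand-in for the syntactic/semantic annotation done ahead of time.
        self.annotations[doc_id] = {"tokens": tokens, "length": len(tokens)}

    def search(self, query):
        terms = [t.lower() for t in query.split()]
        hits = set(self.postings.get(terms[0], set())) if terms else set()
        for t in terms[1:]:
            hits &= self.postings.get(t, set())
        return [(doc_id, self.annotations[doc_id]) for doc_id in hits]

index = InMemoryIndex()
index.add("d1", "Emily Dickinson wrote poems in Amherst.")
index.add("d2", "Amherst is a town in Massachusetts.")
print(index.search("amherst poems"))   # only d1 matches, metadata included
```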

01:12:11 So the infrastructure component there,

01:12:13 if you could just speak to it, how tough it was.

01:12:15 I mean, I know, maybe this is 2008, 2009,

01:12:22 that’s kind of a long time ago.

01:12:25 How hard is it to use multiple machines?

01:12:28 How hard is the infrastructure component,

01:12:29 the hardware component?

01:12:31 So we used IBM hardware.

01:12:33 We had something like, I forgot exactly,

01:12:36 but close to 3000 cores completely connected.

01:12:40 So you had a switch where every CPU

01:12:42 was connected to every other CPU.

01:12:43 And they were sharing memory in some kind of way.

01:12:46 Large shared memory, right?

01:12:47 And all this data was preanalyzed

01:12:50 and put into a very fast indexing structure

01:12:54 that was all in memory.

01:12:58 And then we took that question,

01:13:02 we would analyze the question.

01:13:04 So all the content was now preanalyzed.

01:13:07 So if I went and tried to find a piece of content,

01:13:10 it would come back with all the metadata

01:13:12 that we had precomputed.

01:13:14 How do you shove that question?

01:13:16 How do you connect the big knowledge base

01:13:20 with the metadata and that’s indexed

01:13:22 to the simple little witty confusing question?

01:13:26 Right.

01:13:27 So therein lies the Watson architecture, right?

01:13:31 So we would take the question,

01:13:32 we would analyze the question.

01:13:34 So which means that we would parse it

01:13:37 and interpret it a bunch of different ways.

01:13:38 We’d try to figure out what is it asking about?

01:13:40 So we had multiple strategies

01:13:44 to kind of determine what was it asking for.

01:13:47 That might be represented as a simple string,

01:13:49 a character string,

01:13:51 or something we would connect back

01:13:53 to different semantic types

01:13:54 that were from existing resources.

01:13:56 So anyway, the bottom line is

01:13:57 we would do a bunch of analysis in the question.

01:14:00 And question analysis had to finish and had to finish fast.

01:14:04 So we do the question analysis

01:14:05 because then from the question analysis,

01:14:07 we would now produce searches.

01:14:09 So we would, and we had built

01:14:12 using open source search engines,

01:14:14 we modified them,

01:14:16 but we had a number of different search engines

01:14:17 we would use that had different characteristics.

01:14:20 We went in there and engineered

01:14:22 and modified those search engines,

01:14:24 ultimately to now take our question analysis,

01:14:28 produce multiple queries

01:14:29 based on different interpretations of the question

01:14:33 and fire out a whole bunch of searches in parallel.

01:14:36 And they would come back with passages.

01:14:39 So these are passage search algorithms.

01:14:42 They would come back with passages.

01:14:43 And so now let’s say you had a thousand passages.

01:14:47 Now for each passage, you parallelize again.

01:14:50 So you went out and you parallelize the search.

01:14:55 Each search would now come back

01:14:56 with a whole bunch of passages.

01:14:58 Maybe you had a total of a thousand

01:15:00 or 5,000 whatever passages.

01:15:02 For each passage now,

01:15:03 you’d go and figure out whether or not

01:15:05 there was a candidate,

01:15:06 we’d call it candidate answer in there.

01:15:08 So you had a whole bunch of other algorithms

01:15:11 that would find candidate answers,

01:15:13 possible answers to the question.

01:15:15 And so you had candidate answer,

01:15:17 called candidate answer generators,

01:15:19 a whole bunch of those.
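
Schematically, the fan-out described here looks something like the sketch below: question analysis yields several query interpretations, each query runs as a passage search (in parallel in the real system), and every returned passage is mined for candidate answers. Every component in the sketch is a toy stand-in, not one of Watson's actual algorithms.

```python
# Schematic sketch of the fan-out (all components are toy stand-ins):
# analyze the question into several query interpretations, run each query
# as a passage search, then mine every returned passage for candidates.

from concurrent.futures import ThreadPoolExecutor

def analyze_question(question):
    # Stand-in for question analysis: a few alternative interpretations.
    words = question.lower().replace("?", "").split()
    return [" ".join(words), " ".join(words[:3]), " ".join(words[-3:])]

def passage_search(query, corpus):
    # Stand-in passage search: any passage sharing a word with the query.
    return [p for p in corpus if any(w in p.lower() for w in query.split())]

def generate_candidates(passage):
    # Stand-in candidate generator: capitalized tokens in the passage.
    return {w.strip(".,") for w in passage.split() if w[:1].isupper()}

def fan_out(question, corpus):
    queries = analyze_question(question)
    with ThreadPoolExecutor() as pool:
        passage_lists = list(pool.map(lambda q: passage_search(q, corpus), queries))
    candidates = set()
    for passages in passage_lists:
        for passage in passages:
            candidates |= generate_candidates(passage)
    return candidates

corpus = ["Emily Dickinson wrote nearly 1800 poems in Amherst.",
          "Walt Whitman published Leaves of Grass."]
print(fan_out("Which reclusive Amherst poet wrote nearly 1800 poems?", corpus))
```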

01:15:20 So for every one of these components,

01:15:23 the team was constantly doing research coming up,

01:15:25 better ways to generate search queries from the questions,

01:15:28 better ways to analyze the question,

01:15:29 better ways to generate candidates.

01:15:31 And speed, so better meaning accuracy and speed.

01:15:35 Correct, so right, speed and accuracy

01:15:38 for the most part were separated.

01:15:40 We handle that sort of in separate ways.

01:15:42 Like I focus purely on accuracy, end to end accuracy.

01:15:45 Are we ultimately getting more questions right

01:15:46 and producing more accurate confidences?

01:15:48 And then a whole nother team

01:15:50 that was constantly analyzing the workflow

01:15:52 to find the bottlenecks.

01:15:53 And then figuring out how to both parallelize

01:15:55 and drive the algorithm speed.

01:15:58 But anyway, so now think of it like,

01:15:59 you have this big fan out now, right?

01:16:01 Because you had multiple queries,

01:16:03 now you have thousands of candidate answers.

01:16:06 For each candidate answer, you’re gonna score it.

01:16:09 So you’re gonna use all the data that built up.

01:16:12 You’re gonna use the question analysis,

01:16:15 you’re gonna use how the query was generated,

01:16:17 you’re gonna use the passage itself,

01:16:19 and you’re gonna use the candidate answer

01:16:21 that was generated, and you’re gonna score that.

01:16:25 So now we have a group of researchers

01:16:28 coming up with scores.

01:16:30 There are hundreds of different scores.

01:16:32 So now you’re getting a fan out of it again

01:16:34 from however many candidate answers you have

01:16:37 to all the different scores.

01:16:39 So if you have 200 different scores

01:16:41 and you have a thousand candidates,

01:16:42 now you have 200,000 scores.

01:16:45 And so now you gotta figure out,

01:16:48 how do I now rank these answers

01:16:52 based on the scores that came back?

01:16:54 And I wanna rank them based on the likelihood

01:16:56 that they’re a correct answer to the question.

01:16:58 So every scorer was its own research project.

01:17:01 What do you mean by scorer?

01:17:02 So is that the annotation process

01:17:04 of basically a human being saying that this answer

01:17:07 has a quality of?

01:17:09 Think of it, if you wanna think of it,

01:17:10 what you’re doing, you know,

01:17:12 if you wanna think about what a human would be doing,

01:17:14 human would be looking at a possible answer,

01:17:17 they’d be reading the, you know, Emily Dickinson,

01:17:20 they’d be reading the passage in which that occurred,

01:17:23 they’d be looking at the question,

01:17:25 and they’d be making a decision of how likely it is

01:17:28 that Emily Dickinson, given this evidence in this passage,

01:17:32 is the right answer to that question.

01:17:33 Got it.

01:17:34 So that’s the annotation task.

01:17:36 That’s the annotation process.

01:17:37 That’s the scoring task.

01:17:38 But scoring implies zero to one kind of continuous.

01:17:41 That’s right.

01:17:42 You give it a zero to one score.

01:17:42 So it’s not a binary.

01:17:44 No, you give it a score.

01:17:46 Give it a zero to, yeah, exactly, zero to one score.

01:17:48 But humans give different scores,

01:17:50 so you have to somehow normalize and all that kind of stuff

01:17:52 and deal with all that complexity.

01:17:54 It depends on what your strategy is.

01:17:55 We both, we…

01:17:57 It could be relative, too.

01:17:58 It could be…

01:17:59 We actually looked at the raw scores

01:18:01 as well as standardized scores,

01:18:02 because humans are not involved in this.

01:18:04 Humans are not involved.

01:18:05 Sorry, so I’m misunderstanding the process here.

01:18:08 This is passages.

01:18:10 Where is the ground truth coming from?

01:18:13 Ground truth is only the answers to the questions.

01:18:16 So it’s end to end.

01:18:17 It’s end to end.

01:18:19 So I was always driving end to end performance.

01:18:22 It’s a very interesting, a very interesting

01:18:25 engineering approach,

01:18:27 and ultimately scientific research approach,

01:18:30 always driving end to end.

01:18:31 Now, that’s not to say

01:18:34 we wouldn’t make hypotheses

01:18:38 that individual component performance

01:18:42 was related in some way to end to end performance.

01:18:44 Of course we would,

01:18:45 because people would have to build individual components.

01:18:48 But ultimately, to get your component integrated

01:18:51 to the system, you have to show impact

01:18:53 on end to end performance, question answering performance.

01:18:55 So there’s many very smart people working on this,

01:18:58 and they’re basically trying to sell their ideas

01:19:01 as a component that should be part of the system.

01:19:03 That’s right.

01:19:04 And they would do research on their component,

01:19:07 and they would say things like,

01:19:09 I’m gonna improve this as a candidate generator,

01:19:13 or I’m gonna improve this as a question score,

01:19:15 or as a passage scorer,

01:19:17 I’m gonna improve this, or as a parser,

01:19:20 and I can improve it by 2% on its component metric,

01:19:25 like a better parse, or a better candidate,

01:19:27 or a better type estimation, whatever it is.

01:19:30 And then I would say,

01:19:31 I need to understand how the improvement

01:19:33 on that component metric

01:19:35 is gonna affect the end to end performance.

01:19:37 If you can’t estimate that,

01:19:39 and can’t do experiments to demonstrate that,

01:19:41 it doesn’t get in.

01:19:43 That’s like the best run AI project I’ve ever heard.

01:19:47 That’s awesome.

01:19:48 Okay, what breakthrough would you say,

01:19:51 like, I’m sure there’s a lot of day to day breakthroughs,

01:19:54 but was there like a breakthrough

01:19:55 that really helped improve performance?

01:19:57 Like where people began to believe,

01:20:01 or is it just a gradual process?

01:20:02 Well, I think it was a gradual process,

01:20:04 but one of the things that I think gave people confidence

01:20:08 that we can get there was that,

01:20:11 as we follow this procedure of different ideas,

01:20:16 build different components,

01:20:19 plug them into the architecture,

01:20:20 run the system, see how we do,

01:20:23 do the error analysis,

01:20:24 start off new research projects to improve things.

01:20:28 And the very important idea was

01:20:31 that the individual component work

01:20:37 did not have to deeply understand everything

01:20:40 that was going on with every other component.

01:20:42 And this is where we leverage machine learning

01:20:45 in a very important way.

01:20:47 So while individual components

01:20:48 could be statistically driven machine learning components,

01:20:51 some of them were heuristic,

01:20:52 some of them were machine learning components,

01:20:54 the system as a whole combined all the scores

01:20:58 using machine learning.

01:21:00 This was critical because that way

01:21:02 you can divide and conquer.

01:21:04 So you can say, okay, you work on your candidate generator,

01:21:07 or you work on this approach to answer scoring,

01:21:09 you work on this approach to type scoring,

01:21:11 you work on this approach to passage search

01:21:14 or to passage selection and so forth.

01:21:17 But when we just plug it in,

01:21:19 and we had enough training data to say,

01:21:22 now we can train and figure out

01:21:24 how do we weigh all the scores relative to each other

01:21:29 based on the predicting the outcome,

01:21:31 which is right or wrong on Jeopardy.

01:21:33 And we had enough training data to do that.

01:21:36 So this enabled people to work independently

01:21:40 and to let the machine learning do the integration.
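
The merge step being described is, in spirit, a learned combination of scorer outputs: treat each candidate's score vector as features and fit weights that predict right versus wrong on past questions. Below is a tiny hand-rolled logistic-regression sketch of that idea, with invented toy data; Watson's actual training pipeline was far more elaborate.

```python
# Tiny logistic-regression sketch of the merge step: each candidate's
# vector of scorer outputs is a feature vector, and weights are learned
# to predict right vs. wrong on past questions. Toy data, stdlib only.

import math

def train_combiner(rows, labels, lr=0.5, epochs=2000):
    """rows: list of score vectors; labels: 1 if that candidate was correct."""
    n_features = len(rows[0])
    w, b = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                               # gradient of the log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def confidence(w, b, scores):
    z = sum(wi * xi for wi, xi in zip(w, scores)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy training data: [type_match, passage_overlap] per candidate, with labels.
X = [[1.0, 0.6], [1.0, 0.1], [0.0, 0.4], [0.0, 0.0]]
y = [1, 0, 0, 0]
w, b = train_combiner(X, y)
print(round(confidence(w, b, [1.0, 0.7]), 3))   # high confidence for strong evidence
```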

01:21:43 Beautiful, so yeah, the machine learning

01:21:45 is doing the fusion,

01:21:46 and then it’s a human orchestrated ensemble

01:21:48 of different approaches.

01:21:50 That’s great.

01:21:53 Still impressive that you were able

01:21:54 to get it done in a few years.

01:21:57 That’s not obvious to me that it’s doable,

01:22:00 if I just put myself in that mindset.

01:22:03 But when you look back at the Jeopardy challenge,

01:22:07 again, when you’re looking up at the stars,

01:22:10 what are you most proud of, looking back at those days?

01:22:17 I’m most proud of my,

01:22:27 my commitment and my team’s commitment

01:22:32 to be true to the science,

01:22:35 to not be afraid to fail.

01:22:38 That’s beautiful because there’s so much pressure,

01:22:41 because it is a public event, it is a public show,

01:22:44 that you were dedicated to the idea.

01:22:46 That’s right.

01:22:50 Do you think it was a success?

01:22:53 In the eyes of the world, it was a success.

01:22:56 By your, I’m sure, exceptionally high standards,

01:23:00 is there something you regret you would do differently?

01:23:03 It was a success.

01:23:05 It was a success for our goal.

01:23:08 Our goal was to build the most advanced

01:23:11 open domain question answering system.

01:23:14 We went back to the old problems

01:23:16 that we used to try to solve,

01:23:17 and we did dramatically better on all of them,

01:23:21 as well as we beat Jeopardy.

01:23:24 So we won at Jeopardy.

01:23:25 So it was a success.

01:23:28 It was, I worry that the community

01:23:32 or the world would not understand it as a success

01:23:36 because it came down to only one game.

01:23:38 And I knew statistically speaking,

01:23:40 this can be a huge technical success,

01:23:42 and we could still lose that one game.

01:23:43 And that’s a whole nother theme of this, of the journey.

01:23:47 But it was a success.

01:23:50 It was not a success in natural language understanding,

01:23:53 but that was not the goal.

01:23:56 Yeah, that was, but I would argue,

01:24:00 I understand what you’re saying

01:24:02 in terms of the science,

01:24:04 but I would argue that the inspiration of it, right?

01:24:07 The, not a success in terms of solving

01:24:11 natural language understanding.

01:24:12 There was a success of being an inspiration

01:24:16 to future challenges.

01:24:17 Absolutely.

01:24:18 That drive future efforts.

01:24:21 What’s the difference between how human being

01:24:23 compete in Jeopardy and how Watson does it?

01:24:26 That’s important in terms of intelligence.

01:24:28 Yeah, so that actually came up very early on

01:24:31 in the project also.

01:24:32 In fact, I had people who wanted to be on the project

01:24:35 who were early on, who sort of approached me

01:24:39 once I committed to do it,

01:24:42 had wanted to think about how humans do it.

01:24:44 And they were, from a cognition perspective,

01:24:47 like human cognition and how that should play.

01:24:49 And I would not take them on the project

01:24:52 because another assumption or another stake

01:24:55 I put in the ground was,

01:24:57 I don’t really care how humans do this.

01:25:00 At least in the context of this project.

01:25:01 I need to build in the context of this project.

01:25:03 In NLU and in building an AI that understands

01:25:06 how it needs to ultimately communicate with humans,

01:25:09 I very much care.

01:25:11 So it wasn’t that I didn’t care in general.

01:25:16 In fact, as an AI scientist, I care a lot about that,

01:25:20 but I’m also a practical engineer

01:25:22 and I committed to getting this thing done

01:25:25 and I wasn’t gonna get distracted.

01:25:27 I had to kind of say, like, if I’m gonna get this done,

01:25:30 I’m gonna chart this path.

01:25:31 And this path says, we’re gonna engineer a machine

01:25:35 that’s gonna get this thing done.

01:25:37 And we know what search and NLP can do.

01:25:41 We have to build on that foundation.

01:25:44 If I come in and take a different approach

01:25:46 and start wondering about how the human mind

01:25:48 might or might not do this,

01:25:49 I’m not gonna get there from here in the timeframe.

01:25:54 I think that’s a great way to lead the team.

01:25:56 But now that it’s done and there’s one,

01:25:59 when you look back, analyze what’s the difference actually.

01:26:03 So I was a little bit surprised actually

01:26:05 to discover over time, as this would come up

01:26:09 from time to time and we’d reflect on it,

01:26:13 and talking to Ken Jennings a little bit

01:26:14 and hearing Ken Jennings talk about

01:26:16 how he answered questions,

01:26:18 that it might’ve been closer to the way humans

01:26:21 answer questions than I might’ve imagined previously.

01:26:24 Because humans in the game of Jeopardy,

01:26:27 at the level of Ken Jennings,

01:26:29 are probably also cheating their way to winning, right?

01:26:35 Not cheating, but shallow.

01:26:36 Well, they’re doing shallow analysis.

01:26:37 They’re doing the fastest possible.

01:26:39 They’re doing shallow analysis.

01:26:40 So they are very quickly analyzing the question

01:26:44 and coming up with some key vectors or cues, if you will.

01:26:49 And they’re taking those cues

01:26:51 and they’re very quickly going through

01:26:52 like their library of stuff,

01:26:54 not deeply reasoning about what’s going on.

01:26:57 And then sort of like, lots of different,

01:27:00 like what we would call these scorers,

01:27:03 would kind of score that in a very shallow way

01:27:06 and then say, oh, boom, you know, that’s what it is.

01:27:08 And so it’s interesting as we reflected on that.

01:27:12 So we may be doing something that’s not too far off

01:27:16 from the way humans do it,

01:27:17 but we certainly didn’t approach it by saying,

01:27:21 how would a human do this?

01:27:22 Now in elemental cognition,

01:27:24 like the project I’m leading now,

01:27:27 we ask those questions all the time

01:27:28 because ultimately we’re trying to do something

01:27:31 that is to make the intelligence of the machine

01:27:35 and the intelligence of the human very compatible.

01:27:37 Well, compatible in the sense

01:27:38 they can communicate with one another

01:27:40 and they can reason with this shared understanding.

01:27:44 So how they think about things and how they build answers,

01:27:48 how they build explanations

01:27:49 becomes a very important question to consider.

01:27:52 So what’s the difference between this open domain,

01:27:56 but cold constructed question answering of Jeopardy

01:28:02 and more something that requires understanding

01:28:07 for shared communication with humans and machines?

01:28:10 Yeah, well, this goes back to the interpretation

01:28:13 of what we were talking about before.

01:28:15 Jeopardy, the system’s not trying to interpret the question

01:28:19 and it’s not interpreting the content it’s reusing

01:28:22 with regard to any particular framework.

01:28:23 I mean, it is parsing it and parsing the content

01:28:26 and using grammatical cues and stuff like that.

01:28:29 So if you think of grammar as a human framework

01:28:31 in some sense, it has that,

01:28:33 but when you get into the richer semantic frameworks,

01:28:36 what do people, how do they think, what motivates them,

01:28:40 what are the events that are occurring

01:28:41 and why are they occurring

01:28:42 and what causes what else to happen

01:28:44 and where are things in time and space?

01:28:47 And like when you start thinking about how humans formulate

01:28:51 and structure the knowledge that they acquire in their head,

01:28:54 Watson wasn’t doing any of that.

01:28:57 What do you think are the essential challenges

01:29:01 of like free flowing communication, free flowing dialogue

01:29:05 versus question answering even with the framework

01:29:09 of the interpretation dialogue?

01:29:11 Yep.

01:29:12 Do you see free flowing dialogue

01:29:14 as a fundamentally more difficult than question answering

01:29:20 even with shared interpretation?

01:29:23 So dialogue is important in a number of different ways.

01:29:26 I mean, it’s a challenge.

01:29:27 So first of all, when I think about the machine that,

01:29:30 when I think about a machine that understands language

01:29:33 and ultimately can reason in an objective way

01:29:36 that can take the information that it perceives

01:29:40 through language or other means

01:29:42 and connect it back to these frameworks,

01:29:44 reason and explain itself,

01:29:48 that system ultimately needs to be able to talk to humans

01:29:50 or it needs to be able to interact with humans.

01:29:52 So in some sense it needs to dialogue.

01:29:55 That doesn’t mean that it,

01:29:58 sometimes people talk about dialogue and they think,

01:30:01 you know, how do humans talk to like,

01:30:04 talk to each other in a casual conversation

01:30:07 and you can mimic casual conversations.

01:30:11 We’re not trying to mimic casual conversations.

01:30:14 We’re really trying to produce a machine

01:30:17 whose goal is to help you think

01:30:20 and help you reason about your answers and explain why.

01:30:23 So instead of like talking to your friend down the street

01:30:26 about having a small talk conversation

01:30:28 with your friend down the street,

01:30:30 this is more about like you would be communicating

01:30:32 to the computer on Star Trek

01:30:34 where like, what do you wanna think about?

01:30:36 Like, what do you wanna reason about?

01:30:37 I’m gonna tell you the information I have.

01:30:38 I’m gonna have to summarize it.

01:30:39 I’m gonna ask you questions.

01:30:41 You’re gonna answer those questions.

01:30:42 I’m gonna go back and forth with you.

01:30:44 I’m gonna figure out what your mental model is.

01:30:46 I’m gonna now relate that to the information I have

01:30:49 and present it to you in a way that you can understand it

01:30:53 and then we could ask followup questions.

01:30:54 So it’s that type of dialogue that you wanna construct.

01:30:58 It’s more structured, it’s more goal oriented,

01:31:02 but it needs to be fluid.

01:31:04 In other words, it has to be engaging and fluid.

01:31:09 It has to be productive and not distracting.

01:31:13 So there has to be a model of,

01:31:15 in other words, the machine has to have a model

01:31:17 of how humans think through things and discuss them.

01:31:22 So basically a productive, rich conversation

01:31:28 unlike this podcast.

01:31:32 I’d like to think it’s more similar to this podcast.

01:31:34 I wasn’t joking.

01:31:37 I’ll ask you about humor as well, actually.

01:31:39 But what’s the hardest part of that?

01:31:43 Because it seems we’re quite far away

01:31:46 as a community from that still to be able to,

01:31:49 so one is having a shared understanding.

01:31:53 That’s, I think, a lot of the stuff you said

01:31:54 with frameworks is quite brilliant.

01:31:57 But just creating a smooth discourse.

01:32:02 It feels clunky right now.

01:32:05 Which aspects of this whole problem

01:32:07 that you just specified of having

01:32:10 a productive conversation is the hardest?

01:32:12 And that we’re, or maybe any aspect of it

01:32:17 you can comment on because it’s so shrouded in mystery.

01:32:20 So I think to do this you kind of have to be creative

01:32:24 in the following sense.

01:32:26 If I were to do this as purely a machine learning approach

01:32:29 and someone said learn how to have a good,

01:32:32 fluent, structured knowledge acquisition conversation,

01:32:38 I’d go out and say, okay, I have to collect

01:32:39 a bunch of data of people doing that.

01:32:42 People reasoning well, having a good, structured

01:32:47 conversation that both acquires knowledge efficiently

01:32:50 as well as produces answers and explanations

01:32:52 as part of the process.

01:32:54 And you struggle.

01:32:57 I don’t know.

01:32:58 To collect the data.

01:32:59 To collect the data because I don’t know

01:33:00 how much data is like that.

01:33:02 Okay, there’s one, there’s a humorous commentary

01:33:06 on the lack of rational discourse.

01:33:08 But also even if it’s out there, say it was out there,

01:33:12 how do you actually annotate, like how do you collect

01:33:16 an accessible example?

01:33:17 Right, so I think any problem like this

01:33:19 where you don’t have enough data to represent

01:33:23 the phenomenon you want to learn,

01:33:24 in other words you want, if you have enough data

01:33:26 you could potentially learn the pattern.

01:33:28 In an example like this it’s hard to do.

01:33:30 This is sort of a human sort of thing to do.

01:33:34 What recently came out of IBM was the Debater project,

01:33:37 and it’s interesting, right, because now you do have

01:33:39 these structured dialogues, these debate things

01:33:42 where they did use machine learning techniques

01:33:44 to generate these debates.

01:33:49 Dialogues are a little bit tougher in my opinion

01:33:52 than generating a structured argument

01:33:56 where you have lots of other structured arguments

01:33:57 like this, you could potentially annotate that data

01:33:59 and you could say this is a good response,

01:34:00 this is a bad response in a particular domain.

01:34:03 Here I have to be responsive and I have to be opportunistic

01:34:08 with regard to what is the human saying.

01:34:11 So I’m goal oriented in saying I want to solve the problem,

01:34:14 I want to acquire the knowledge necessary,

01:34:16 but I also have to be opportunistic and responsive

01:34:19 to what the human is saying.

01:34:21 So I think that it’s not clear that we could just train

01:34:24 on the body of data to do this, but we could bootstrap it.

01:34:28 In other words, we can be creative and we could say,

01:34:30 what do we think the structure of a good dialogue is

01:34:34 that does this well?

01:34:35 And we can start to create that.

01:34:37 If we can create that more programmatically,

01:34:42 at least to get this process started

01:34:44 and I can create a tool that now engages humans effectively,

01:34:47 I could start generating data,

01:34:51 I could start the human learning process

01:34:53 and I can update my machine,

01:34:55 but I could also start the automatic learning process

01:34:57 as well, but I have to understand

01:34:59 what features to even learn over.

01:35:01 So I have to bootstrap the process a little bit first.

01:35:04 And that’s a creative design task

01:35:07 that I could then use as input

01:35:11 into a more automatic learning task.

01:35:13 So some creativity in bootstrapping.

01:35:16 What elements of a conversation

01:35:18 do you think you would like to see?

01:35:21 So one of the benchmarks for me is humor, right?

01:35:25 That seems to be one of the hardest.

01:35:27 And to me, the biggest contrast is sort of Watson.

01:35:31 So one of the greatest sketches,

01:35:33 comedy sketches of all time, right,

01:35:35 is the SNL celebrity Jeopardy

01:35:38 with Alex Trebek and Sean Connery

01:35:42 and Burt Reynolds and so on,

01:35:44 with Sean Connery commentating on Alex Trebek’s

01:35:47 while they’re alive.

01:35:49 And I think all of them are in the negative, points-wise.

01:35:52 So they’re clearly all losing

01:35:55 in terms of the game of Jeopardy,

01:35:56 but they’re winning in terms of comedy.

01:35:58 So what do you think about humor in this whole interaction

01:36:03 in the dialogue that’s productive?

01:36:06 Or even just what humor represents to me

01:36:09 is the same idea that you’re saying about framework,

01:36:15 because humor only exists

01:36:16 within a particular human framework.

01:36:18 So what do you think about humor?

01:36:19 What do you think about things like humor

01:36:21 that connect to the kind of creativity

01:36:23 you mentioned that’s needed?

01:36:25 I think there’s a couple of things going on there.

01:36:26 So I sort of feel like,

01:36:29 and I might be too optimistic this way,

01:36:31 but I think that there are,

01:36:34 we did a little bit of that with puns in Jeopardy.

01:36:39 We literally sat down and said,

01:36:41 how do puns work?

01:36:43 And it’s like wordplay,

01:36:44 and you could formalize these things.

01:36:46 So I think there’s a lot aspects of humor

01:36:48 that you could formalize.

01:36:50 You could also learn humor.

01:36:51 You could just say, what do people laugh at?

01:36:53 And if you have enough, again,

01:36:54 if you have enough data to represent the phenomenon,

01:36:56 you might be able to weigh the features

01:36:59 and figure out what humans find funny

01:37:01 and what they don’t find funny.

01:37:02 The machine might not be able to explain

01:37:05 why the human finds it funny unless we sit back

01:37:08 and think about that more formally.

01:37:10 I think, again, I think you do a combination of both.

01:37:12 And I’m always a big proponent of that.

01:37:13 I think robust architectures and approaches

01:37:16 are always a little bit combination of us reflecting

01:37:19 and being creative about how things are structured,

01:37:22 how to formalize them,

01:37:23 and then taking advantage of large data and doing learning

01:37:26 and figuring out how to combine these two approaches.

01:37:29 I think there’s another aspect to humor though,

01:37:31 which goes to the idea that I feel like I can relate

01:37:34 to the person telling the story.

01:37:38 And I think that’s an interesting theme

01:37:42 in the whole AI theme,

01:37:43 which is, do I feel differently when I know it’s a robot?

01:37:48 And when I imagine that the robot is not conscious

01:37:52 the way I’m conscious,

01:37:54 when I imagine the robot does not actually

01:37:56 have the experiences that I experience,

01:37:58 do I find it funny?

01:38:00 Or do, because it’s not as related,

01:38:03 I don’t imagine that the person’s relating it to it

01:38:06 the way I relate to it.

01:38:07 I think this also, you see this in the arts

01:38:11 and in entertainment where,

01:38:14 sometimes you have savants who are remarkable at a thing,

01:38:17 whether it’s sculpture or it’s music or whatever,

01:38:19 but the people who get the most attention

01:38:21 are the people who can evoke a similar emotional response,

01:38:26 who can get you to emote, right?

01:38:30 About the way they are.

01:38:31 In other words, who can basically make the connection

01:38:34 from the artifact, from the music or the painting

01:38:37 of the sculpture to the emotion

01:38:39 and get you to share that emotion with them.

01:38:42 And then, and that’s when it becomes compelling.

01:38:44 So they’re communicating at a whole different level.

01:38:46 They’re just not communicating the artifact.

01:38:49 They’re communicating their emotional response

01:38:50 to the artifact.

01:38:51 And then you feel like, oh wow,

01:38:53 I can relate to that person, I can connect to that,

01:38:55 I can connect to that person.

01:38:57 So I think humor has that aspect as well.

01:39:00 So the idea that you can connect to that person,

01:39:04 person being the critical thing,

01:39:07 but we’re also able to anthropomorphize objects,

01:39:12 robots and AI systems pretty well.

01:39:15 So we’re almost looking to make them human.

01:39:18 So maybe from your experience with Watson,

01:39:20 maybe you can comment on, did you consider that as part of it,

01:39:24 well, obviously the problem of Jeopardy

01:39:27 doesn’t require anthropomorphization, but nevertheless.

01:39:30 Well, there was some interest in doing that.

01:39:32 And that’s another thing I didn’t want to do

01:39:35 because I didn’t want to distract

01:39:36 from the actual scientific task.

01:39:38 But you’re absolutely right.

01:39:39 I mean, humans do anthropomorphize

01:39:43 and without necessarily a lot of work.

01:39:45 I mean, you just put some eyes

01:39:47 and a couple of eyebrow movements

01:39:49 and you’re getting humans to react emotionally.

01:39:51 And I think you can do that.

01:39:53 So I didn’t mean to suggest that,

01:39:56 that that connection cannot be mimicked.

01:40:00 I think that connection can be mimicked

01:40:02 and can produce that emotional response.

01:40:07 I just wonder though, if you’re told what’s really going on,

01:40:13 if you know that the machine is not conscious,

01:40:17 not having the same richness of emotional reactions

01:40:20 and understanding that it doesn’t really

01:40:21 share the understanding,

01:40:23 but it’s essentially just moving its eyebrow

01:40:25 or drooping its eyes or making them bigger,

01:40:27 whatever it’s doing, just getting the emotional response,

01:40:30 will you still feel it?

01:40:31 Interesting.

01:40:32 I think you probably would for a while.

01:40:34 And then when it becomes more important

01:40:35 that there’s a deeper share of understanding,

01:40:38 it may fall flat, but I don’t know.

01:40:40 I’m pretty confident that the majority of the world,

01:40:45 even if you tell them how it works,

01:40:47 well, it will not matter,

01:40:49 especially if the machine herself says that she is conscious.

01:40:55 That’s very possible.

01:40:56 So you, the scientist that made the machine is saying

01:41:00 that this is how the algorithm works.

01:41:02 Everybody will just assume you’re lying

01:41:04 and that there’s a conscious being there.

01:41:06 So you’re deep into the science fiction genre now,

01:41:09 but yeah.

01:41:10 I don’t think it is, it’s actually psychology.

01:41:12 I think it’s not science fiction.

01:41:13 I think it’s reality.

01:41:14 I think it’s a really powerful one

01:41:16 that we’ll have to be exploring in the next few decades.

01:41:19 I agree.

01:41:20 It’s a very interesting element of intelligence.

01:41:23 So what do you think,

01:41:25 we’ve talked about social constructs of intelligence

01:41:28 and frameworks and the way humans

01:41:31 kind of interpret information.

01:41:33 What do you think is a good test of intelligence

01:41:35 in your view?

01:41:36 So there’s the Alan Turing with the Turing test.

01:41:41 Watson accomplished something very impressive with Jeopardy.

01:41:44 What do you think is a test

01:41:47 that would impress the heck out of you

01:41:49 that you saw that a computer could do?

01:41:52 That would make you say, this is crossing a kind of threshold

01:41:57 that gives me pause in a good way.

01:42:02 My expectations for AI are generally high.

01:42:06 What does high look like by the way?

01:42:07 So not the threshold, a test is a threshold.

01:42:10 What do you think is the destination?

01:42:12 What do you think is the ceiling?

01:42:15 I think machines, in many measures,

01:42:18 will be better than us, will become more effective.

01:42:21 In other words, better predictors about a lot of things

01:42:25 than ultimately we can do.

01:42:28 I think where they’re gonna struggle

01:42:30 is what we talked about before,

01:42:32 which is relating to communicating with

01:42:36 and understanding humans in deeper ways.

01:42:40 And so I think that’s a key point,

01:42:42 like we can create the super parrot.

01:42:44 What I mean by the super parrot is given enough data,

01:42:47 a machine can mimic your emotional response,

01:42:50 can even generate language that will sound smart

01:42:52 and like what someone else might say under similar circumstances.

01:42:57 Like I would just pause on that,

01:42:58 like that’s the super parrot, right?

01:43:01 So given similar circumstances,

01:43:03 moves its faces in similar ways,

01:43:06 changes its tone of voice in similar ways,

01:43:09 produces strings of language that are similar

01:43:12 to what a human might say,

01:43:14 not necessarily being able to produce

01:43:16 a logical interpretation or understanding

01:43:20 that would ultimately satisfy a critical interrogation

01:43:25 or a critical understanding.

01:43:27 I think you just described me in a nutshell.

01:43:30 So I think philosophically speaking,

01:43:34 you could argue that that’s all we’re doing

01:43:36 as human beings too, we’re super parrots.

01:43:37 So I was gonna say, it’s very possible,

01:43:40 you know, humans do behave that way too.

01:43:42 And so upon deeper probing and deeper interrogation,

01:43:45 you may find out that there isn’t a shared understanding

01:43:48 because I think humans do both.

01:43:50 Like humans are statistical language model machines

01:43:54 and they are capable reasoners.

01:43:57 You know, they’re both.

01:43:59 And you don’t know which is going on, right?

01:44:02 So, and I think it’s an interesting problem.
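
As a concrete picture of that “statistical language model” side, here is a minimal bigram parrot in Python: it strings together plausible-sounding words purely from co-occurrence counts, with no interpretation behind them. The tiny corpus and the output are toys, not a claim about any real system.

    # A minimal "super parrot" sketch: a bigram model that produces
    # fluent-looking strings from co-occurrence statistics alone,
    # with no understanding of what the words mean.

    import random
    from collections import defaultdict

    corpus = ("the machine answered the question because the data said "
              "the answer was likely and the machine sounded confident").split()

    # Count which word tends to follow which.
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    def parrot(start: str, length: int = 10) -> str:
        """Generate text by repeatedly sampling a statistically likely next word."""
        words = [start]
        for _ in range(length):
            options = following.get(words[-1])
            if not options:
                break
            words.append(random.choice(options))
        return " ".join(words)

    print(parrot("the"))  # fluent-sounding output, no model of what it means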

01:44:09 We talked earlier about like where we are

01:44:11 in our social and political landscape.

01:44:14 Can you distinguish someone who can string words together

01:44:19 and sound like they know what they’re talking about

01:44:21 from someone who actually does?

01:44:24 Can you do that without dialogue,

01:44:25 without interrogative or probing dialogue?

01:44:27 So it’s interesting because humans are really good

01:44:31 in their own mind, justifying or explaining what they hear

01:44:34 because they project their understanding onto yours.

01:44:38 So you could say, you could put together a string of words

01:44:41 and someone will sit there and interpret it

01:44:44 in a way that’s extremely biased

01:44:46 to the way they wanna interpret it.

01:44:47 They wanna assume that you’re an idiot

01:44:48 and they’ll interpret it one way.

01:44:50 They will assume you’re a genius

01:44:51 and they’ll interpret it another way that suits their needs.

01:44:54 So this is tricky business.

01:44:56 So I think to answer your question,

01:44:59 as AI gets better and better at mimicry,

01:45:02 as you recreate the super parrot,

01:45:03 we’re challenged just as we are

01:45:06 challenged with humans.

01:45:08 Do you really know what you’re talking about?

01:45:10 Do you have a meaningful interpretation,

01:45:14 a powerful framework that you could reason over

01:45:17 and justify your answers, justify your predictions

01:45:23 and your beliefs, why you think they make sense.

01:45:25 Can you convince me what the implications are?

01:45:28 So can you reason intelligently and make me believe

01:45:34 the implications of your prediction, and so forth?

01:45:40 So what happens is it becomes reflective.

01:45:44 My standard for judging your intelligence

01:45:46 depends a lot on mine.

01:45:49 But you’re saying there should be a large group of people

01:45:54 with a certain standard of intelligence

01:45:56 that would be convinced by this particular AI system.

01:46:02 Then they’ll pass.

01:46:03 There should be, but I think depending on the content,

01:46:07 one of the problems we have there

01:46:09 is that if that large community of people

01:46:12 are not judging it with regard to a rigorous standard

01:46:16 of objective logic and reason, you still have a problem.

01:46:19 Like masses of people can be persuaded.

01:46:23 The millennials, yeah.

01:46:25 To turn their brains off.

01:46:29 Right, okay.

01:46:31 Sorry.

01:46:32 By the way, I have nothing against the millennials.

01:46:33 No, I don’t, I’m just, just.

01:46:36 So you’re a part of one of the great benchmarks,

01:46:40 challenges of AI history.

01:46:43 What do you think about the AlphaZero, OpenAI Five,

01:46:47 and AlphaStar accomplishments on video games recently,

01:46:50 which also, I think, at least in the case of Go,

01:46:55 with AlphaGo and AlphaZero playing Go,

01:46:57 was a monumental accomplishment as well.

01:46:59 What are your thoughts about that challenge?

01:47:01 I think it was a giant landmark for AI.

01:47:03 I think it was phenomenal.

01:47:04 I mean, it was one of those other things

01:47:06 nobody thought like solving Go was gonna be easy,

01:47:08 particularly because it’s hard,

01:47:10 particularly hard for humans.

01:47:12 Hard for humans to learn, hard for humans to excel at.

01:47:15 And so it was another measure, a measure of intelligence.

01:47:21 It’s very cool.

01:47:22 I mean, it’s very interesting what they did.

01:47:25 And I loved how they solved the data problem,

01:47:27 which again, they bootstrapped it

01:47:29 and got the machine to play itself,

01:47:30 to generate enough data to learn from.

01:47:32 I think that was brilliant.

01:47:33 I think that was great.

01:47:35 And of course, the result speaks for itself.
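
As a rough illustration of that self-play bootstrapping loop, and emphatically not DeepMind’s actual AlphaGo or AlphaZero code, here is a toy version in Python: the current policy plays games against itself, the outcomes become the training data, and the policy is updated from that data. The “game,” the policy representation, and the update rule are all invented stand-ins.

    # Toy self-play loop: the policy generates its own training data by
    # playing itself, then is nudged toward the winning side's moves.

    import random
    from typing import Dict, List, Tuple

    def play_one_game(policy: Dict[int, float]) -> Tuple[List[int], int]:
        """Both sides sample moves from the same policy; the side with the
        higher total wins. Returns the moves and +1/-1 for player one."""
        moves = [random.choices([0, 1, 2],
                                weights=[policy[a] for a in (0, 1, 2)])[0]
                 for _ in range(4)]
        outcome = 1 if sum(moves[::2]) >= sum(moves[1::2]) else -1
        return moves, outcome

    def update(policy: Dict[int, float],
               games: List[Tuple[List[int], int]],
               lr: float = 0.05) -> Dict[int, float]:
        """Shift probability mass toward moves played by the winning side."""
        new = dict(policy)
        for moves, outcome in games:
            winner_moves = moves[::2] if outcome == 1 else moves[1::2]
            for m in winner_moves:
                new[m] += lr
        total = sum(new.values())
        return {a: w / total for a, w in new.items()}

    policy = {0: 1 / 3, 1: 1 / 3, 2: 1 / 3}
    for generation in range(20):
        batch = [play_one_game(policy) for _ in range(200)]  # self-generated data
        policy = update(policy, batch)                        # learn from it
    print(policy)  # drifts toward stronger moves with no human games at all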

01:47:38 I think it makes us think about,

01:47:40 again, it is, okay, what’s intelligence?

01:47:42 What aspects of intelligence are important?

01:47:45 Can the Go machine help make me a better Go player?

01:47:49 Is it an alien intelligence?

01:47:51 Am I even capable of,

01:47:53 like again, if we put it in very simple terms,

01:47:56 it found the function, it found the Go function.

01:47:59 Can I even comprehend the Go function?

01:48:00 Can I talk about the Go function?

01:48:02 Can I conceptualize the Go function,

01:48:03 like whatever it might be?

01:48:05 So one of the interesting ideas of that system

01:48:08 is that it plays against itself, right?

01:48:10 But there’s no human in the loop there.

01:48:12 So like you’re saying, it could have by itself

01:48:16 created an alien intelligence.

01:48:18 How?

01:48:19 Go beyond Go, imagine you’re sentencing,

01:48:21 you’re a judge and you’re sentencing people,

01:48:24 or you’re setting policy,

01:48:26 or you’re making medical decisions,

01:48:31 and you can’t explain,

01:48:33 you can’t get anybody to understand

01:48:34 what you’re doing or why.

01:48:37 So it’s an interesting dilemma

01:48:40 for the applications of AI.

01:48:43 Do we hold AI to this accountability

01:48:47 that says humans have to be willing

01:48:51 to take responsibility for the decision?

01:48:56 In other words, can you explain why you would do the thing?

01:48:58 Will you get up and speak to other humans

01:49:02 and convince them that this was a smart decision?

01:49:04 Is the AI enabling you to do that?

01:49:07 Can you get behind the logic that was made there?

01:49:10 Do you think, sorry to linger on this point,

01:49:13 because it’s a fascinating one.

01:49:15 It’s a great goal for AI.

01:49:17 Do you think it’s achievable in many cases?

01:49:21 Or, okay, there’s two possible worlds

01:49:23 that we have in the future.

01:49:25 One is where AI systems do like medical diagnosis

01:49:28 or things like that, or drive a car

01:49:32 without ever explaining to you why it fails when it does.

01:49:36 That’s one possible world and we’re okay with it.

01:49:40 Or the other where we are not okay with it

01:49:42 and we really hold back the technology

01:49:45 from getting too good before it’s able to explain.

01:49:48 Which of those worlds are more likely, do you think,

01:49:50 and which are concerning to you or not?

01:49:53 I think the reality is it’s gonna be a mix.

01:49:56 I’m not sure I have a problem with that.

01:49:57 I mean, I think there are tasks that are perfectly fine

01:49:59 where machines show a certain level of performance

01:50:03 and that level of performance is already better than humans.

01:50:07 So, for example, take driverless cars.

01:50:11 If driverless cars learn how to be more effective drivers

01:50:14 than humans but can’t explain what they’re doing,

01:50:16 but bottom line, statistically speaking,

01:50:19 they’re 10 times safer than humans,

01:50:22 I don’t know that I care.

01:50:24 I think when we have these edge cases

01:50:27 when something bad happens and we wanna decide

01:50:29 who’s liable for that thing and who made that mistake

01:50:32 and what do we do about that?

01:50:33 And I think those edge cases are interesting cases.

01:50:36 And now do we go to the designers of the AI,

01:50:38 and the designer says, I don’t know, that’s what it learned

01:50:40 to do, and we say, well, you didn’t train it properly.

01:50:43 You were negligent in the training data

01:50:46 that you gave that machine.

01:50:47 Like, how do we drive down the liability?

01:50:49 So I think those are interesting questions.

01:50:53 So the optimization problem there, sorry,

01:50:55 is to create an AI system that’s able

01:50:56 to explain the lawyers away.

01:51:00 There you go.

01:51:01 I think it’s gonna be interesting.

01:51:04 I mean, I think this is where technology

01:51:05 and social discourse are gonna get like deeply intertwined

01:51:09 and how we start thinking about problems, decisions

01:51:12 and problems like that.

01:51:13 I think in other cases it becomes more obvious

01:51:15 where it’s like, why did you decide

01:51:20 to give that person a longer sentence or deny them parole?

01:51:27 Again, policy decisions or why did you pick that treatment?

01:51:30 Like that treatment ended up killing that guy.

01:51:32 Like, why was that a reasonable choice to make?

01:51:36 And people are gonna demand explanations.

01:51:40 Now there’s a reality though here.

01:51:43 And the reality is that it’s not,

01:51:45 I’m not sure humans are making reasonable choices

01:51:48 when they do these things.

01:51:49 They are using statistical hunches, biases,

01:51:54 or even systematically using statistical averages

01:51:58 to make calls.

01:51:59 This is what happened to my dad

01:52:00 and you may have seen the talk I gave about that.

01:52:01 But they decided that my father was brain dead.

01:52:07 He had gone into cardiac arrest

01:52:09 and it took a long time for the ambulance to get there

01:52:12 and he was not resuscitated right away and so forth.

01:52:14 And they came and they told me he was brain dead

01:52:16 and why was he brain dead?

01:52:17 Because essentially they gave me

01:52:19 a purely statistical argument under these conditions

01:52:22 with these four features, 98% chance he’s brain dead.

01:52:25 I said, but can you just tell me, not inductively

01:52:28 but deductively, go in there and tell me

01:52:30 his brain’s not functioning, is there a way for you to do that?

01:52:32 And the response, per protocol, was,

01:52:35 no, this is how we make this decision.

01:52:37 I said, this is inadequate for me.

01:52:39 I understand the statistics, and, I don’t know,

01:52:43 there’s a 2% chance he’s still alive.

01:52:44 I just don’t know the specifics.

01:52:46 I need the specifics of this case

01:52:49 and I want the deductive logical argument

01:52:51 about why you actually know he’s brain dead.

01:52:53 So I wouldn’t sign the do not resuscitate.

01:52:55 And I don’t know, it was like they went through

01:52:57 lots of procedures, it was a big long story,

01:53:00 but the bottom line, it was a fascinating story by the way,

01:53:02 how I reasoned and how the doctors reasoned

01:53:04 through this whole process.

01:53:05 But I don’t know, somewhere around 24 hours later

01:53:07 or something, he was sitting up in bed

01:53:09 with zero brain damage.

01:53:13 I mean, what lessons do you draw from that story,

01:53:18 that experience?

01:53:19 That the data that’s being used

01:53:22 to make statistical inferences

01:53:24 doesn’t adequately reflect the phenomenon.

01:53:26 So in other words, you’re getting shit wrong,

01:53:28 I’m sorry, but you’re getting stuff wrong

01:53:31 because your model is not robust enough

01:53:35 and you might be better off not using statistical inference

01:53:41 and statistical averages in certain cases

01:53:43 when you know the model’s insufficient

01:53:45 and that you should be reasoning about the specific case

01:53:48 more logically and more deductively

01:53:51 and hold yourself responsible

01:53:52 and hold yourself accountable to doing that.

01:53:55 And perhaps AI has a role to say the exact thing

01:53:59 what you just said, which is perhaps this is a case

01:54:02 you should think for yourself,

01:54:05 you should reason deductively.

01:54:08 Well, so it’s hard because it’s hard to know that.

01:54:14 You’d have to go back and you’d have to have enough data

01:54:17 to essentially say, and this goes back to how do we,

01:54:20 this goes back to the case of how do we decide

01:54:22 whether the AI is good enough to do a particular task

01:54:25 and regardless of whether or not

01:54:27 it produces an explanation.

01:54:30 And what standard do we hold for that?

01:54:34 So if you look more broadly, for example,

01:54:42 at my father as a medical case,

01:54:48 the medical system ultimately helped him a lot

01:54:50 throughout his life, without it,

01:54:52 he probably would have died much sooner.

01:54:55 So overall, it sort of worked for him

01:54:58 in sort of a net, net kind of way.

01:55:02 Actually, I don’t know that that’s fair.

01:55:04 But maybe not in that particular case, but overall,

01:55:07 like the medical system overall does more good than bad.

01:55:10 Yeah, the medical system overall

01:55:12 was doing more good than bad.

01:55:14 Now, there’s another argument that suggests

01:55:16 that wasn’t the case, but for the sake of argument,

01:55:18 let’s say like that’s, let’s say a net positive.

01:55:21 And I think you have to sit there

01:55:22 and take that into consideration.

01:55:24 Now you look at a particular use case,

01:55:26 like for example, making this decision,

01:55:29 have you done enough studies to know

01:55:33 how good that prediction really is?

01:55:37 And have you done enough studies to compare it,

01:55:40 to say, well, what if we dug in in a more direct way,

01:55:45 let’s get the evidence, let’s do the deductive thing

01:55:47 and not use statistics here,

01:55:49 how often would that have done better?

01:55:52 So you have to do the studies

01:55:53 to know how good the AI actually is.

01:55:56 And it’s complicated because it depends how fast

01:55:58 you have to make the decision.

01:55:59 So if you have to make the decision super fast,

01:56:02 you have no choice.

01:56:04 If you have more time, right?

01:56:06 But if you’re ready to pull the plug,

01:56:09 and this is a lot of the argument that I had with a doctor,

01:56:11 I said, what’s he gonna do if you do it,

01:56:13 what’s gonna happen to him in that room if you do it my way?

01:56:16 You know, well, he’s gonna die anyway.

01:56:18 So let’s do it my way then.

01:56:20 I mean, it raises questions for our society

01:56:22 to struggle with, as the case with your father,

01:56:26 but also when things like race and gender

01:56:28 start coming into play when certain,

01:56:31 when judgments are made based on things

01:56:35 that are complicated in our society,

01:56:39 at least in the discourse.

01:56:40 And it starts, you know, I think I’m safe to say

01:56:43 that most of the violent crimes are committed

01:56:46 by males, so if you discriminate based on that,

01:56:51 you know, male versus female, saying that

01:56:53 if it’s a male, they’re more likely to commit the crime.

01:56:56 This is one of my very positive and optimistic views

01:57:01 of why the study of artificial intelligence,

01:57:05 the process of thinking and reasoning logically

01:57:08 and statistically, and how to combine them

01:57:10 is so important for the discourse today,

01:57:12 because, regardless of what state AI devices

01:57:17 are in or not, it’s causing this dialogue to happen.

01:57:22 This is one of the most important dialogues

01:57:24 that in my view, the human species can have right now,

01:57:28 which is how to think well, how to reason well,

01:57:33 how to understand our own cognitive biases

01:57:39 and what to do about them.

01:57:40 That has got to be one of the most important things

01:57:43 we as a species can be doing, honestly.

01:57:47 We are, we’ve created an incredibly complex society.

01:57:51 We’ve created amazing abilities to amplify noise faster

01:57:56 than we can amplify signal.

01:57:59 We are challenged.

01:58:01 We are deeply, deeply challenged.

01:58:03 We have, you know, big segments of the population

01:58:06 getting hit with enormous amounts of information.

01:58:08 Do they know how to do critical thinking?

01:58:10 Do they know how to objectively reason?

01:58:14 Do they understand what they are doing,

01:58:16 nevermind what their AI is doing?

01:58:19 This is such an important dialogue to be having.

01:58:23 And, you know, we are fundamentally,

01:58:26 our thinking can be, and easily becomes, fundamentally biased.

01:58:31 And there are statistics, and we shouldn’t blind ourselves to them,

01:58:34 we shouldn’t discard statistical inference,

01:58:37 but we should understand the nature

01:58:39 of statistical inference.

01:58:40 As a society, as you know,

01:58:44 we decide to reject statistical inference

01:58:48 in favor of understanding and deciding on the individual.

01:58:55 Yes.

01:58:57 We consciously make that choice.

01:59:00 So even if the statistics said

01:59:04 that males are more likely,

01:59:08 you know, to be violent criminals,

01:59:09 we still take each person as an individual

01:59:12 and we treat them based on the logic

01:59:16 and the knowledge of that situation.

01:59:20 We purposefully and intentionally

01:59:24 reject the statistical inference.

01:59:28 We do that out of respect for the individual.

01:59:31 For the individual.

01:59:32 Yeah, and that requires reasoning and thinking.

01:59:34 Correct.

01:59:35 Looking forward, what grand challenges

01:59:37 would you like to see in the future?

01:59:38 Because the Jeopardy challenge, you know,

01:59:43 captivated the world.

01:59:45 AlphaGo, AlphaZero captivated the world.

01:59:48 Deep Blue certainly beating Kasparov.

01:59:51 Garry’s bitterness aside, captivated the world.

01:59:55 What do you think, do you have ideas

01:59:57 for next grand challenges for future challenges of that?

02:00:00 You know, look, I mean, I think there are lots

02:00:03 of really great ideas for grand challenges.

02:00:05 I’m particularly focused on one right now,

02:00:08 which is, you know, can you demonstrate

02:00:11 that machines understand, that they can read and understand,

02:00:14 that they can acquire these frameworks

02:00:18 and communicate, you know,

02:00:19 reason and communicate with humans.

02:00:21 So it is kind of like the Turing test,

02:00:23 but it’s a little bit more demanding than the Turing test.

02:00:26 It’s not enough to convince me that you might be human

02:00:31 because you could, you know, you can parrot a conversation.

02:00:34 I think, you know, the standard

02:00:38 is a little bit higher.

02:00:43 And I think one of the challenges

02:00:45 of devising this grand challenge is that we’re not sure

02:00:51 what intelligence is, we’re not sure how to determine

02:00:56 whether or not two people actually understand each other

02:00:59 and in what depth they understand it, you know,

02:01:02 to what depth they understand each other.

02:01:04 So the challenge becomes something along the lines of,

02:01:08 can you satisfy me that we have a shared understanding?

02:01:14 So if I were to probe and probe and you probe me,

02:01:18 can machines really act like thought partners

02:01:23 where they can satisfy me that we have a shared,

02:01:27 our understanding is shared enough

02:01:29 that we can collaborate and produce answers together

02:01:33 and that, you know, they can help me explain

02:01:35 and justify those answers.

02:01:36 So maybe here’s an idea.

02:01:38 So we’ll have an AI system run for president and convince...

02:01:44 That’s too easy.

02:01:46 I’m sorry, go ahead.

02:01:46 Well, no, you have to convince the voters

02:01:49 that they should vote for it.

02:01:51 So like, I guess what does winning look like?

02:01:53 Again, that’s why I think this is such a challenge

02:01:55 because we go back to the emotional persuasion.

02:01:59 We go back to, you know, now we’re checking off an aspect

02:02:06 of human cognition that is in many ways weak or flawed,

02:02:11 right, we’re so easily manipulated.

02:02:13 Our minds are drawn in, often for the wrong reasons, right?

02:02:19 Not the reasons that ultimately matter to us,

02:02:21 but the reasons that can easily persuade us.

02:02:23 I think we can be persuaded to believe one thing or another

02:02:28 for reasons that ultimately don’t serve us well

02:02:31 in the longterm.

02:02:33 And a good benchmark should not play with those elements

02:02:38 of emotional manipulation.

02:02:40 I don’t think so.

02:02:41 And I think that’s where we have to set the higher standard

02:02:44 for ourselves of what, you know, what does it mean?

02:02:47 This goes back to rationality

02:02:48 and it goes back to objective thinking.

02:02:50 And can you produce, can you acquire information

02:02:53 and produce reasoned arguments

02:02:54 and do those reasoned arguments

02:02:56 pass a certain amount of muster, and,

02:03:00 and can you acquire new knowledge?

02:03:02 You know, can you, for example, reason,

02:03:06 I have acquired new knowledge,

02:03:07 can you identify where it’s consistent with or contradictory

02:03:11 to other things you’ve learned?

02:03:12 And can you explain that to me

02:03:14 and get me to understand that?

02:03:15 So I think another way to think about it perhaps

02:03:18 is can a machine teach you, can it help you understand

02:03:31 something that you didn’t really understand before

02:03:35 where it’s taking you, so you’re not,

02:03:39 again, it’s almost like can it teach you,

02:03:41 can it help you learn in an arbitrary space,

02:03:46 in an open domain space?

02:03:49 So can you tell the machine, and again,

02:03:50 this borrows from some science fiction,

02:03:52 but can you go off and learn about this topic

02:03:55 that I’d like to understand better

02:03:58 and then work with me to help me understand it?

02:04:02 That’s quite brilliant.

02:04:03 What, the machine that passes that kind of test,

02:04:06 do you think it would need to have self awareness

02:04:11 or even consciousness?

02:04:13 What do you think about consciousness

02:04:16 and the importance of it maybe in relation to having a body,

02:04:21 having a presence, an entity?

02:04:24 Do you think that’s important?

02:04:26 You know, people used to ask me if Watson was conscious

02:04:28 and I used to say, he’s conscious of what exactly?

02:04:32 I mean, I think, you know, maybe it depends

02:04:34 what it is that you’re conscious of.

02:04:36 I mean, like, so, you know,

02:04:39 it’s certainly easy for it to answer questions about that,

02:04:42 it would be trivial to program it

02:04:44 to answer questions about whether or not

02:04:46 it was playing Jeopardy.

02:04:47 I mean, it could certainly answer questions

02:04:48 that would imply that it was aware of things.

02:04:51 Exactly, what does it mean to be aware

02:04:52 and what does it mean to be conscious of?

02:04:53 It’s sort of interesting.

02:04:54 I mean, I think that we differ from one another

02:04:57 based on what we’re conscious of.

02:05:01 But wait, wait a minute, yes, for sure.

02:05:02 There’s degrees of consciousness in there, so.

02:05:05 Well, and there’s just areas.

02:05:06 Like, it’s not just degrees, what are you aware of?

02:05:10 Like, what are you not aware of?

02:05:11 But nevertheless, there’s a very subjective element

02:05:13 to our experience.

02:05:16 Let me even not talk about consciousness.

02:05:18 Let me talk about another, to me,

02:05:21 really interesting topic of mortality, fear of mortality.

02:05:25 Watson, as far as I could tell,

02:05:29 did not have a fear of death.

02:05:32 Certainly not.

02:05:33 Most humans do.

02:05:36 Wasn’t conscious of death.

02:05:39 It wasn’t, yeah.

02:05:40 So there’s an element of finiteness to our existence

02:05:44 that I think, like you mentioned, survival,

02:05:47 that adds to the whole thing.

02:05:49 I mean, consciousness is tied up with that,

02:05:50 that we are a thing.

02:05:52 It’s a subjective thing that ends.

02:05:56 And that seems to add a color and flavor

02:05:59 to our motivations in a way

02:06:00 that seems to be fundamentally important for intelligence,

02:06:05 or at least the kind of human intelligence.

02:06:07 Well, I think for generating goals, again,

02:06:10 I think you could have,

02:06:12 you could have an intelligence capability

02:06:14 and a capability to learn, a capability to predict.

02:06:18 But I think without,

02:06:22 I mean, again, you get fear,

02:06:23 but essentially without the goal to survive.

02:06:27 So you think you can just encode that

02:06:29 without having to really?

02:06:30 I think you could encode.

02:06:31 I mean, you could create a robot now,

02:06:32 and you could say, you know, plug it in,

02:06:36 and say, protect your power source, you know,

02:06:38 and give it some capabilities,

02:06:39 and it’ll sit there and operate

02:06:40 to try to protect its power source and survive.

02:06:42 I mean, so I don’t know

02:06:44 that that’s philosophically a hard thing to demonstrate.

02:06:46 It sounds like a fairly easy thing to demonstrate

02:06:48 that you can give it that goal.

02:06:50 Will it come up with that goal by itself?

02:06:52 I think you have to program that goal in.

02:06:54 But there’s something,

02:06:56 because I think, as we touched on,

02:06:58 intelligence is kind of like a social construct.

02:07:01 The fact that a robot will be protecting its power source

02:07:07 would add depth and grounding to its intelligence

02:07:12 in terms of us being able to respect it.

02:07:15 I mean, ultimately, it boils down to us acknowledging

02:07:18 that it’s intelligent.

02:07:20 And the fact that it can die,

02:07:23 I think, is an important part of that.

02:07:26 The interesting thing to reflect on

02:07:27 is how trivial that would be.

02:07:29 And I don’t think, if you knew how trivial that was,

02:07:32 you would associate that with intelligence.

02:07:35 I mean, you’d literally put in a statement of code

02:07:37 that says you have the following actions you can take.

02:07:40 You give it a bunch of actions,

02:07:41 like maybe you mount a laser gun on it,

02:07:44 or you give it the ability to scream or screech or whatever.

02:07:48 And you say, if you see your power source threatened,

02:07:52 then you could program that in,

02:07:53 and you’re gonna take these actions to protect it.

02:07:58 You know, you could train it on a bunch of things.
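
For what it’s worth, here is roughly how trivial that kind of hand-coded survival rule can be, a made-up sketch with invented sensor values, thresholds, and action names, not code from any real robot:

    # A deliberately trivial sketch of the point being made: "protect your
    # power source" can be a few lines of condition-action code, which is
    # exactly why the behavior alone shouldn't be read as intelligence.

    def choose_action(power_level: float, threat_detected: bool) -> str:
        """Hand-coded survival rule: no learning, no goals of its own."""
        if threat_detected:
            return "screech_and_back_away"      # programmed-in defensive action
        if power_level < 0.2:
            return "navigate_to_charging_dock"  # programmed-in self-preservation
        return "continue_task"

    # The robot "protects itself" because we told it to, in a handful of lines.
    print(choose_action(power_level=0.15, threat_detected=False))
    print(choose_action(power_level=0.80, threat_detected=True))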

02:08:02 So, and now you’re gonna look at that and you say,

02:08:04 well, you know, is that intelligence,

02:08:05 that it’s protecting its power source?

02:08:06 Maybe, but that’s, again, this human bias that says,

02:08:10 the thing is, I identify my intelligence and my consciousness

02:08:14 so fundamentally with the desire,

02:08:16 or at least the behaviors associated

02:08:18 with the desire to survive,

02:08:21 that if I see another thing doing that,

02:08:24 I’m going to assume it’s intelligent.

02:08:27 In what timeline, what year,

02:08:29 will society have something

02:08:34 that you would be comfortable calling

02:08:36 an artificial general intelligence system?

02:08:39 Well, what’s your intuition?

02:08:41 Nobody can predict the future,

02:08:42 certainly not the next few months or 20 years away,

02:08:46 but what’s your intuition?

02:08:47 How far away are we?

02:08:50 I don’t know.

02:08:50 It’s hard to make these predictions.

02:08:52 I mean, I would be guessing,

02:08:54 and there’s so many different variables,

02:08:57 including just how much we want to invest in it

02:08:59 and how important we think it is,

02:09:03 what kind of investment we’re willing to make in it,

02:09:06 what kind of talent we end up bringing to the table,

02:09:08 the incentive structure, all these things.

02:09:10 So I think it is possible to do this sort of thing.

02:09:15 I think, trying to sort of

02:09:20 ignore many of the variables and things like that,

02:09:23 is it a 10 year thing, is it a 20, 30 year thing?

02:09:25 Probably closer to a 20 year thing, I guess.

02:09:27 But not several hundred years.

02:09:29 No, I don’t think it’s several hundred years.

02:09:32 I don’t think it’s several hundred years.

02:09:33 But again, so much depends on how committed we are

02:09:38 to investing and incentivizing this type of work.

02:09:43 And it’s sort of interesting.

02:09:45 Like, I don’t think it’s obvious how incentivized we are.

02:09:50 I think from a task perspective,

02:09:53 if we see business opportunities to take this technique

02:09:57 or that technique to solve that problem,

02:09:59 I think that’s the main driver for many of these things.

02:10:03 From a general intelligence,

02:10:05 it’s kind of an interesting question.

02:10:06 Are we really motivated to do that?

02:10:09 And like, we just struggled ourselves right now

02:10:12 to even define what it is.

02:10:14 So it’s hard to incentivize

02:10:16 when we don’t even know what it is

02:10:17 we’re incentivized to create.

02:10:18 And if you said mimic a human intelligence,

02:10:23 I just think there are so many challenges

02:10:25 with the significance and meaning of that.

02:10:27 That there’s not a clear directive.

02:10:29 There’s no clear directive to do precisely that thing.

02:10:32 So assistance in a larger and larger number of tasks.

02:10:36 So being able to,

02:10:38 a system that’s particularly able to operate my microwave

02:10:41 and make a grilled cheese sandwich.

02:10:42 I don’t even know how to make one of those.

02:10:44 And then the same system will be doing the vacuum cleaning.

02:10:48 And then the same system would be teaching

02:10:53 my kids, that I don’t have, math.

02:10:56 I think that when you get into a general intelligence

02:11:00 for learning physical tasks,

02:11:04 and again, I wanna go back to your body question

02:11:06 because I think your body question was interesting,

02:11:07 but if you wanna go back to learning the abilities

02:11:11 to do physical tasks,

02:11:11 We might get,

02:11:14 I imagine in that timeframe,

02:11:16 we will get better and better at learning these kinds

02:11:18 of tasks, whether it’s mowing your lawn

02:11:20 or driving a car or whatever it is.

02:11:22 I think we will get better and better at that

02:11:24 where it’s learning how to make predictions

02:11:25 over large bodies of data.

02:11:27 I think we’re gonna continue to get better

02:11:28 and better at that.

02:11:30 And machines will outpace humans

02:11:33 in a variety of those things.

02:11:35 The underlying mechanisms for doing that may be the same,

02:11:40 meaning that maybe these are deep nets,

02:11:43 there’s infrastructure to train them,

02:11:46 reusable components to get them to do different classes

02:11:49 of tasks, and we get better and better

02:11:51 at building these kinds of machines.

02:11:53 You could argue that the general learning infrastructure

02:11:56 in there is a form of a general type of intelligence.
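
As a loose structural sketch of that “reusable learning infrastructure” idea, here is a toy in Python: one shared feature extractor reused by several small task-specific heads. The features, task names, and weights are invented placeholders, standing in for what would really be a trained deep net and learned heads.

    # Structural sketch: a shared backbone reused across different task heads,
    # illustrating "reusable components for different classes of tasks."

    from typing import Callable, Dict, List

    def shared_backbone(raw: str) -> List[float]:
        """Stand-in for a reusable representation learner (e.g., a trained deep net)."""
        return [len(raw) / 100.0,
                raw.count(" ") / 20.0,
                sum(c.isdigit() for c in raw) / 10.0]

    def make_head(weights: List[float]) -> Callable[[List[float]], float]:
        """Each task gets its own small head on top of the shared representation."""
        return lambda feats: sum(w * f for w, f in zip(weights, feats))

    heads: Dict[str, Callable[[List[float]], float]] = {
        "spam_score": make_head([0.2, 0.5, 1.5]),     # hypothetical task weights
        "urgency_score": make_head([0.1, 0.3, 0.8]),  # another task, same backbone
    }

    features = shared_backbone("Meeting moved to 3pm, call 555 0100 if late")
    for task, head in heads.items():
        print(task, round(head(features), 3))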

02:12:01 I think what starts getting harder is this notion of,

02:12:06 can we effectively communicate and understand

02:12:09 and build that shared understanding?

02:12:10 Because of the layers of interpretation that are required

02:12:13 to do that, and the need for the machine to be engaged

02:12:16 with humans at that level in a continuous basis.

02:12:20 So how do you get the machine in the game?

02:12:23 How do you get the machine in the intellectual game?

02:12:26 Yeah, and to solve AGI,

02:12:29 you probably have to solve that problem.

02:12:31 You have to get the machine,

02:12:31 so it’s a little bit of a bootstrapping thing.

02:12:33 Can we get the machine engaged in the intellectual game,

02:12:39 but in the intellectual dialogue with the humans?

02:12:42 Are the humans sufficiently in intellectual dialogue

02:12:44 with each other to generate enough data in this context?

02:12:49 And how do you bootstrap that?

02:12:51 Because every one of those conversations,

02:12:54 every one of those conversations,

02:12:55 those intelligent interactions,

02:12:58 require so much prior knowledge

02:12:59 that it’s a challenge to bootstrap it.

02:13:01 So the question is, and how committed?

02:13:05 So I think that’s possible, but when I go back to,

02:13:08 are we incentivized to do that?

02:13:10 I know we’re incentivized to do the former.

02:13:13 Are we incentivized to do the latter significantly enough?

02:13:15 Do people understand what the latter really is well enough?

02:13:18 Part of the elemental cognition mission

02:13:20 is to try to articulate that better and better

02:13:23 through demonstrations

02:13:24 and through trying to craft these grand challenges

02:13:26 and get people to say, look,

02:13:28 this is a class of intelligence.

02:13:30 This is a class of AI.

02:13:31 Do we want this?

02:13:33 What is the potential of this?

02:13:35 What’s the business potential?

02:13:37 What’s the societal potential to that?

02:13:40 And to build up that incentive system around that.

02:13:45 Yeah, I think if people don’t understand yet,

02:13:46 I think they will.

02:13:47 I think there’s a huge business potential here.

02:13:49 So it’s exciting that you’re working on it.

02:13:54 We kind of skipped over,

02:13:54 but I’m a huge fan of physical presence of things.

02:13:59 Do you think Watson had a body?

02:14:03 Do you think having a body adds to the interactive element

02:14:08 between the AI system and a human,

02:14:11 or just in general to intelligence?

02:14:14 So I think going back to that shared understanding bit,

02:14:19 humans are very connected to their bodies.

02:14:21 I mean, one of the challenges in getting an AI

02:14:26 to kind of be a human compatible intelligence

02:14:29 is that our physical bodies are generating a lot of features

02:14:33 that make up the input.

02:14:37 So in other words, our bodies are the tool

02:14:40 we use to affect output,

02:14:42 but they also generate a lot of input for our brains.

02:14:46 So we generate emotion, we generate all these feelings,

02:14:49 we generate all these signals that machines don’t have.

02:14:52 So machines don’t have this as the input data

02:14:56 and they don’t have the feedback that says,

02:14:58 I’ve gotten this emotion or I’ve gotten this idea,

02:15:02 I now want to process it,

02:15:04 and then it then affects me as a physical being,

02:15:08 and I can play that out.

02:15:12 In other words, I could realize the implications of that,

02:15:14 implications again, on my mind body complex,

02:15:17 I then process that, and the implications again,

02:15:19 our internal features are generated, I learn from them,

02:15:23 they have an effect on my mind body complex.

02:15:26 So it’s interesting when we think,

02:15:28 do we want a human intelligence?

02:15:30 Well, if we want a human compatible intelligence,

02:15:33 probably the best thing to do

02:15:34 is to embed it in a human body.

02:15:36 Just to clarify, and both concepts are beautiful,

02:15:39 is humanoid robots, so robots that look like humans is one,

02:15:45 or did you mean actually sort of what Elon Musk

02:15:50 is working on with Neuralink,

02:15:52 really embedding intelligence systems

02:15:55 to ride along human bodies?

02:15:59 No, I mean riding along is different.

02:16:01 I meant like if you want to create an intelligence

02:16:05 that is human compatible,

02:16:08 meaning that it can learn and develop

02:16:10 a shared understanding of the world around it,

02:16:13 you have to give it a lot of the same substrate.

02:16:15 Part of that substrate is the idea

02:16:18 that it generates these kinds of internal features,

02:16:21 like sort of emotional stuff, it has similar senses,

02:16:24 it has to do a lot of the same things

02:16:25 with those same senses, right?

02:16:28 So I think if you want that,

02:16:29 again, I don’t know that you want that.

02:16:32 That’s not my specific goal,

02:16:34 I think that’s a fascinating scientific goal,

02:16:35 I think it has all kinds of other implications.

02:16:37 That’s sort of not the goal.

02:16:39 I want to create, I think of it

02:16:41 as creating intellectual thought partners for humans,

02:16:44 so that kind of intelligence.

02:16:47 I know there are other companies

02:16:48 that are creating physical thought partners,

02:16:50 physical partners for humans,

02:16:52 but that’s kind of not where I’m at.

02:16:56 But the important point is that a big part

02:17:00 of what we process is that physical experience

02:17:06 of the world around us.

02:17:08 On the point of thought partners,

02:17:10 what role does an emotional connection,

02:17:13 or forgive me, love, have to play

02:17:17 in that thought partnership?

02:17:19 Is that something you’re interested in,

02:17:22 put another way, sort of having a deep connection,

02:17:26 beyond intellectual?

02:17:29 With the AI?

02:17:30 Yeah, with the AI, between human and AI.

02:17:32 Is that something that gets in the way

02:17:34 of the rational discourse?

02:17:37 Is that something that’s useful?

02:17:39 I worry about biases, obviously.

02:17:41 So in other words, if you develop an emotional relationship

02:17:44 with a machine, all of a sudden you start,

02:17:46 you’re more likely to believe what it’s saying,

02:17:48 even if it doesn’t make any sense.

02:17:50 So I worry about that.

02:17:53 But at the same time,

02:17:54 I think the opportunity to use machines

02:17:56 to provide human companionship is actually not crazy.

02:17:59 And intellectual and social companionship

02:18:04 is not a crazy idea.

02:18:06 Do you have concerns, as a few people do,

02:18:09 Elon Musk, Sam Harris,

02:18:11 about long term existential threats of AI,

02:18:15 and perhaps short term threats of AI?

02:18:18 We talked about bias,

02:18:19 we talked about different misuses,

02:18:21 but do you have concerns about thought partners,

02:18:25 systems that are able to help us make decisions

02:18:28 together as humans,

02:18:29 somehow having a significant negative impact

02:18:31 on society in the long term?

02:18:33 I think there are things to worry about.

02:18:35 I think giving machines too much leverage is a problem.

02:18:41 And what I mean by leverage is,

02:18:44 too much control over things that can hurt us,

02:18:47 whether it’s socially, psychologically, intellectually,

02:18:50 or physically.

02:18:51 And if you give the machines too much control,

02:18:53 I think that’s a concern.

02:18:54 Forget about the AI,

02:18:56 just once you give them too much control,

02:18:58 human bad actors can hack them and produce havoc.

02:19:04 So that’s a problem.

02:19:07 And you’d imagine hackers taking over

02:19:10 the driverless car network

02:19:11 and creating all kinds of havoc.

02:19:15 But you could also imagine given the ease

02:19:19 at which humans could be persuaded one way or the other,

02:19:22 and now we have algorithms that can easily take control

02:19:25 over that and amplify noise

02:19:29 and move people one direction or another.

02:19:32 I mean, humans do that to other humans all the time.

02:19:34 And we have marketing campaigns,

02:19:35 we have political campaigns that take advantage

02:19:38 of our emotions or our fears.

02:19:41 And this is done all the time.

02:19:44 But with machines, machines are like giant megaphones.

02:19:47 We can amplify this in orders of magnitude

02:19:50 and fine tune its control so we can tailor the message.

02:19:54 We can now very rapidly and efficiently tailor the message

02:19:58 to the audience, taking advantage of their biases

02:20:04 and amplifying them and using them to persuade them

02:20:06 in one direction or another in ways that are not fair,

02:20:10 not logical, not objective, not meaningful.

02:20:13 Humans do it, and machines empower that.

02:20:17 So that’s what I mean by leverage.

02:20:18 Like it’s not new, but wow, it’s powerful

02:20:22 because machines can do it more effectively,

02:20:24 more quickly and we see that already going on

02:20:27 in social media and other places.

02:20:31 That’s scary.

02:20:33 And that’s why I go back to saying one of the most important

02:20:42 public dialogues we could be having

02:20:45 is about the nature of intelligence

02:20:47 and the nature of inference and logic

02:20:52 and reason and rationality and us understanding

02:20:56 our own biases, us understanding our own cognitive biases

02:20:59 and how they work and then how machines work

02:21:03 and how do we use them to complement us, basically,

02:21:06 so that in the end we have a stronger overall system.

02:21:09 That’s just incredibly important.

02:21:13 I don’t think most people understand that.

02:21:15 So like telling your kids or telling your students,

02:21:20 this goes back to the cognition.

02:21:22 Here’s how your brain works.

02:21:24 Here’s how easy it is to trick your brain, right?

02:21:28 There are fundamental cognitive biases,

02:21:29 you should appreciate the different types of thinking

02:21:34 and how they work and what you’re prone to

02:21:36 and what you prefer.

02:21:40 And under what conditions does this make sense

02:21:42 versus does that make sense?

02:21:43 And then say, here’s what AI can do.

02:21:46 Here’s how it can make this worse

02:21:48 and here’s how it can make this better.

02:21:51 And then that’s where the AI has a role

02:21:52 is to reveal that trade off.

02:21:56 So if you imagine a system that is able to go

02:22:00 beyond any definition of the Turing test or the benchmark,

02:22:06 really an AGI system as a thought partner

02:22:10 that you one day will create,

02:22:14 what question, what topic of discussion,

02:22:19 if you get to pick one, would you have with that system?

02:22:23 What would you ask and you get to find out

02:22:28 the truth together?

02:22:33 So you threw me a little bit with finding the truth

02:22:36 at the end, because the truth is a whole other topic.

02:22:41 But I think the beauty of it,

02:22:43 I think what excites me is the beauty of it is

02:22:46 if I really have that system, I don’t have to pick.

02:22:48 So in other words, I can go to it and say,

02:22:51 this is what I care about today.

02:22:54 And that’s what we mean by like this general capability,

02:22:57 go out, read this stuff in the next three milliseconds.

02:23:00 And I wanna talk to you about it.

02:23:02 I wanna draw analogies, I wanna understand

02:23:05 how this affects this decision or that decision.

02:23:08 What if this were true?

02:23:09 What if that were true?

02:23:10 What knowledge should I be aware of

02:23:13 that could impact my decision?

02:23:16 Here’s what I’m thinking is the main implication.

02:23:18 Can you prove that out?

02:23:21 Can you give me the evidence that supports that?

02:23:23 Can you give me evidence that supports this other thing?

02:23:25 Boy, would that be incredible?

02:23:27 Would that be just incredible?

02:23:28 Just a long discourse.

02:23:30 Just to be part of whether it’s a medical diagnosis

02:23:33 or whether it’s the various treatment options

02:23:35 or whether it’s a legal case

02:23:38 or whether it’s a social problem

02:23:40 that people are discussing,

02:23:41 like be part of the dialogue,

02:23:43 one that holds itself and us accountable

02:23:49 to reasons and objective dialogue.

02:23:52 I get goosebumps talking about it, right?

02:23:54 It’s like, this is what I want.

02:23:57 So when you create it, please come back on the podcast

02:24:01 and we can have a discussion together

02:24:03 and make it even longer.

02:24:04 This is a record for the longest conversation

02:24:07 in the world.

02:24:08 It was an honor, it was a pleasure, David.

02:24:09 Thank you so much for talking to me.

02:24:10 Thanks so much, a lot of fun.