Douglas Lenat: Cyc and the Quest to Solve Common Sense Reasoning in AI #221

Transcript

00:00:00 The following is a conversation with Doug Lenat, creator of Cyc, a system that for close to 40

00:00:06 years, and still today, has sought to solve the core problem of artificial intelligence,

00:00:12 the acquisition of common sense knowledge and the use of that knowledge to think,

00:00:18 to reason, and to understand the world. To support this podcast, please check out our sponsors in

00:00:23 the description. As a side note, let me say that in the excitement of the modern era of machine

00:00:29 learning, it is easy to forget just how little we understand exactly how to build the kind of

00:00:36 intelligence that matches the power of the human mind. To me, many of the core ideas behind Cyc,

00:00:42 in some form, in actuality or in spirit, will likely be part of the AI system that achieves

00:00:48 general superintelligence. But perhaps more importantly, solving this problem of common

00:00:54 sense knowledge will help us humans understand our own minds, the nature of truth, and finally,

00:01:01 how to be more rational and more kind to each other. This is the Lex Fridman podcast,

00:01:07 and here is my conversation with Doug Lenat. Cyc is a project launched by you in 1984,

00:01:16 and still is active today, whose goal is to assemble a knowledge base that spans the basic

00:01:20 concepts and rules about how the world works. In other words, it hopes to capture common sense

00:01:26 knowledge, which is a lot harder than it sounds. Can you elaborate on this mission and maybe

00:01:32 perhaps speak to the various subgoals within this mission? When I was a faculty member in the

00:01:39 computer science department at Stanford, my colleagues and I did research in all sorts of

00:01:46 artificial intelligence programs, so natural language understanding programs, robots,

00:01:53 expert systems, and so on. And we kept hitting the very same brick wall. Our systems would have

00:02:02 impressive early successes. And so if your only goal was academic, namely to get enough material

00:02:12 to write a journal article, that might actually suffice. But if you’re really trying to get AI,

00:02:19 then you have to somehow get past the brick wall. And the brick wall was

00:02:23 the programs didn’t have what we would call common sense. They didn’t have general world

00:02:28 knowledge. They didn’t really understand what they were doing, what they were saying,

00:02:33 what they were being asked. And so very much like a clever dog performing tricks,

00:02:40 we could get them to do tricks, but they never really understood what they were doing. Sort of

00:02:44 like when you get a dog to fetch your morning newspaper. The dog might do that successfully,

00:02:50 but the dog has no idea what a newspaper is or what it says or anything like that.

00:02:55 What does it mean to understand something? Can you maybe elaborate on that a little bit?

00:02:59 Is understanding the act of combining little things together, like through inference?

00:03:05 Or is understanding the wisdom you gain over time that forms knowledge?

00:03:10 I think of understanding more like the ground you stand on, which could be very shaky,

00:03:20 could be very unsafe, but most of the time is not because underneath it is more ground,

00:03:28 and eventually rock and other things. But layer after layer after layer, that solid foundation

00:03:36 is there. And you rarely need to think about it, you rarely need to count on it, but occasionally

00:03:41 you do. And I’ve never used this analogy before, so bear with me. But I think the same thing is

00:03:48 true in terms of getting computers to understand things, which is you ask a computer a question,

00:03:56 for instance, Alexa or some robot or something, and maybe it gets the right answer.

00:04:02 But if you were asking that of a human, you could also say things like, why? Or how might you be

00:04:09 wrong about this? Or something like that. And the person would answer you. And it might be a little

00:04:17 annoying if you have a small child and they keep asking why questions in series. Eventually,

00:04:22 you get to the point where you throw up your hands and say, I don’t know, it’s just the way

00:04:25 the world is. But for many layers, you actually have that layered, solid foundation of support,

00:04:35 so that when you need it, you can count on it. And when do you need it? Well, when things are

00:04:40 unexpected, when you come up against a situation which is novel. For instance, when you’re driving,

00:04:46 it may be fine to have a small program, a small set of rules that cover 99% of the cases, but that

00:04:55 1% of the time when something strange happens, you really need to draw on common sense. For instance,

00:05:02 my wife and I were driving recently and there was a trash truck in front of us. And I guess they had

00:05:09 packed it too full and the back exploded. And trash bags went everywhere. And we had to make

00:05:17 a split second decision. Are we going to slam on our brakes? Are we going to swerve into another

00:05:21 lane? Are we going to just run it over? Because there were cars all around us. And in front of us

00:05:29 was a large trash bag. And we know what we throw away in trash bags, probably not a safe thing to

00:05:34 run over. Over on the left was a bunch of fast food restaurant trash bags. And it’s like,

00:05:42 oh, well, those things are just like styrofoam and leftover food. We’ll run over that. And so that

00:05:47 was a safe thing for us to do. Now, that’s the kind of thing that’s going to happen maybe once

00:05:52 in your life. But the point is that there’s almost no telling what little bits of knowledge about the

00:06:01 world you might actually need in some situations which were unforeseen. But see, when you sit on

00:06:08 that mountain or that ground that goes deep of knowledge in order to make a split second decision

00:06:16 about fast food trash or random trash from the back of a trash truck, you need to be able to

00:06:26 leverage that ground you stand on in some way. It’s not merely, you know, it’s not enough to just

00:06:31 have a lot of ground to stand on. It’s your ability to leverage it, to utilize it,

00:06:38 to integrate it all together to make that split second decision. And I suppose understanding isn’t

00:06:45 just having a common sense knowledge to access. It’s the act of accessing it somehow, like

00:06:55 correctly filtering out the parts of the knowledge that are not useful, selecting only the useful

00:07:02 parts and effectively making conclusive decisions. So let’s tease apart two different tasks really,

00:07:10 both of which are incredibly important and even necessary. If you’re going to have this in a

00:07:16 useful, usable fashion as opposed to say like library books sitting on a shelf and so on, where

00:07:25 the knowledge might be there, but if a fire comes, the books are going to burn because they don’t

00:07:31 know what’s in them and they’re just going to sit there while they burn. So there are two aspects of

00:07:38 using the knowledge. One is a kind of a theoretical, how is it possible at all? And then the second

00:07:45 aspect of what you said is, how can you do it quickly enough? So how can you do it at all is

00:07:51 something that philosophers have grappled with. And fortunately, philosophers 100 years ago and

00:07:58 even earlier developed a kind of formal language for statements like those in English. It’s called predicate logic or first

00:08:10 order logic, or something like predicate calculus, and so on. So there’s a way of representing things

00:08:17 in this formal language which enables a mechanical procedure to sort of grind through

00:08:26 and algorithmically produce all of the same logical entailments, all the same logical conclusions

00:08:34 that you or I would from that same set of pieces of information that are represented that way.
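The "mechanical procedure" Lenat describes, algorithmically grinding through the logical entailments of a set of assertions, can be sketched as a tiny forward-chaining loop over if-then rules. This is an illustrative toy in Python, not Cyc's actual inference engine or its representation language; the facts and rules are made up for the example:

```python
# Toy forward chaining: keep applying rules until no new conclusions appear.
facts = {("isa", "Fred", "Human"), ("isa", "Human", "Mammal")}

# Each rule: if all premises (variables start with "?") match, add the conclusion.
rules = [
    # Transitivity of "isa": ?x isa ?y and ?y isa ?z  =>  ?x isa ?z
    ([("isa", "?x", "?y"), ("isa", "?y", "?z")], ("isa", "?x", "?z")),
    # All mammals are mortal.
    ([("isa", "?x", "Mammal")], ("mortal", "?x")),
]

def match(pattern, fact, bindings):
    """Unify a single pattern against a ground fact, extending bindings."""
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if b.get(p, f) != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Try every way of binding the premises against known facts.
            envs = [{}]
            for prem in premises:
                envs = [b2 for b in envs for f in facts
                        if len(f) == len(prem)
                        for b2 in [match(prem, f, b)] if b2 is not None]
            for env in envs:
                new = tuple(env.get(t, t) for t in conclusion)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

derived = forward_chain(facts, rules)
print(("mortal", "Fred") in derived)  # True: Fred inherits mortality via Mammal
```

Real resolution theorem provers handle full first-order logic with negation and disjunction; this sketch only shows the flavor of deriving the same conclusions a person would from the same represented information.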

00:08:41 So that sort of raises a couple questions. One is, how do you get all this information

00:08:48 from say observations and English and so on into this logical form? And secondly,

00:08:54 how can you then efficiently run these algorithms to actually get the information you need?

00:09:01 In the case I mentioned in a 10th of a second rather than say in 10 hours or 10,000 years

00:09:08 of computation. And those are both really important questions. And like a corollary

00:09:15 addition to the first one is, how many such things do you need to gather for it to be useful

00:09:22 in certain contexts? So, like, you mentioned philosophers, in order to capture this

00:09:28 world and represent it in a logical way, with a form of logic, how many statements are

00:09:36 required? Is it five? Is it 10? Is it 10 trillion? As far as I understand, that’s

00:09:43 probably still an open question. It may forever be an open question, at least if you ask it definitively:

00:09:50 to describe the universe perfectly, how many facts do you need?

00:09:54 I guess I’m going to disappoint you by giving you an actual answer to your question.

00:10:00 Okay. Well, no, this sounds exciting.

00:10:03 Yes. Okay. So now we have like three things to talk about.

00:10:09 We’ll keep adding more.

00:10:10 Although it’s okay. The first and the third are related. So let’s leave the efficiency

00:10:16 question aside for now. So how does all this information get represented in logical form?

00:10:24 So that these algorithms, resolution theorem proving and other algorithms can actually grind

00:10:30 through all the logical consequences of what you said. And that ties into your question about how

00:10:37 many of these things do you need? Because if the answer is small enough, then by hand, you could

00:10:43 write them out one at a time. So in early 1984, I held a meeting at Stanford, where I was a

00:10:57 faculty member there, where we assembled about half a dozen of the smartest people I know.

00:11:05 People like Allen Newell and Marvin Minsky and Alan Kay and a few others.

00:11:15 Was Feynman there by chance? Because he commented about your system,

00:11:19 Eurisko, at the time.

00:11:20 No, he wasn’t part of this meeting.

00:11:23 That’s a heck of a meeting anyway.

00:11:25 I think Ed Feigenbaum was there. I think Josh Lederberg was there. So we have all these different

00:11:32 smart people. And we came together to address the question that you raised, which is, if it’s

00:11:41 important to represent common sense knowledge and world knowledge in order for AIs to not be

00:11:46 brittle, in order for AIs not to just have the veneer of intelligence, well, how many pieces

00:11:53 of common sense, how many if then rules, for instance, would we have to actually write in

00:11:59 order to essentially cover what people expect perfect strangers to already know about the world?

00:12:07 And I expected there would be an enormous divergence of opinion and estimation. But

00:12:14 amazingly, everyone got an answer which was around a million. And one person got the answer

00:12:23 by saying, well, look, you can only burn into human long term memory a certain number of things

00:12:30 per unit time, like maybe one every 30 seconds or something. And other than that, it’s just short

00:12:36 term memory and it flows away like water and so on. So by the time you’re, say, 10 years old or so,

00:12:42 how many things could you possibly have burned into your long term memory? And it’s like about

00:12:47 a million. Another person went in a completely different direction and said, well, if you look

00:12:52 at the number of words in a dictionary, not a whole dictionary, but for someone to essentially

00:13:00 be considered to be fluent in a language, how many words would they need to know? And then

00:13:05 about how many things about each word would you have to tell it? And so they got to a million

00:13:10 that way. Another person said, well, let’s actually look at one single short, one volume

00:13:20 desk encyclopedia article. And so we’ll look at what was like a four paragraph article or

00:13:27 something, I think, about grebes. Grebes are a type of waterfowl. And if we were going to sit there

00:13:34 and represent every single thing that was there, how many assertions or rules or statements would

00:13:41 we have to write in this logical language and so on and then multiply that by all of the number of

00:13:46 articles that there were and so on. So all of these estimates came out with a million. And so

00:13:53 if you do the math, it turns out that like, oh, well, then maybe in something like 100

00:14:01 person years in one or two person centuries, we could actually get this written down by hand.
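The back-of-the-envelope math here goes roughly as follows; the entry rate per hour is an illustrative assumption of mine, not a figure quoted in the conversation:

```python
# Rough reconstruction of the 1984 meeting's estimate. The per-hour rate
# is an assumed, illustrative number, not from the transcript.
assertions_needed = 1_000_000       # the estimate everyone converged on
assertions_per_hour = 5             # assume: hand-writing careful logical assertions
hours_per_person_year = 2_000       # ~40 hours/week * 50 weeks

person_years = assertions_needed / (assertions_per_hour * hours_per_person_year)
print(person_years)  # 100.0, i.e. on the order of one or two person-centuries
```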

00:14:09 And a marvelous coincidence, an opportunity existed right at that point in time, the early 1980s.

00:14:19 There was something called the Japanese fifth generation computing effort. Japan had threatened

00:14:25 to do in computing and AI and hardware what they had just finished doing in consumer electronics

00:14:32 and the automotive industry, namely wresting control away from the United States and more

00:14:36 generally away from the West. And so America was scared and Congress did something. That’s how you

00:14:44 know it was a long time ago because Congress did something. Congress passed something called the

00:14:48 National Cooperative Research Act, NCRA. And what it said was, hey, all you big American companies,

00:14:55 that’s also how you know it was a long time ago because they were American companies rather than

00:14:59 multinational companies. Hey, all you big American companies, normally it would be an antitrust

00:15:05 violation if you colluded on R&D, but we promise for the next 10 years, we won’t prosecute any of

00:15:13 you if you do that to help combat this threat. And so overnight, the first two consortia,

00:15:20 research consortia in America sprang up, both of them coincidentally in Austin, Texas. One called

00:15:27 Sematech, focusing on hardware, chips and so on, and then one called MCC, the Microelectronics

00:15:34 and Computer Technology Corporation, focusing more on software, on databases and AI and natural

00:15:41 language understanding and things like that. And I got the opportunity, thanks to my friend Woody

00:15:48 Bledsoe, who was one of the people who founded that, to come and be its principal scientist.

00:15:54 And Admiral Bob Inman, who was the person running MCC, came and talked to me and said,

00:16:03 look, professor, you’re talking about doing this project, it’s going to involve

00:16:08 centuries of effort. You’ve only got a handful of graduate students, you do the math, it’s going to

00:16:13 take you longer than the rest of your life to finish this project. But if you move to the wilds

00:16:20 of Austin, Texas, we’ll put 10 times as many people on it and you’ll be done in a few years.

00:16:27 And so that was pretty exciting. And so I did that. I took my leave from Stanford, I came to

00:16:34 Austin, I worked for MCC. And the good news and bad news, the bad news is that all of us were

00:16:40 off by an order of magnitude. That it turns out what you need are tens of millions of these

00:16:47 pieces of knowledge about everyday things, sort of like: if you have a coffee cup with stuff in it and you

00:16:53 turn it upside down, the stuff in it’s going to fall out. So you need tens of millions of pieces

00:16:58 of knowledge like that, even if you take trouble to make each one as general as it possibly could

00:17:04 be. But the good news was that thanks to initially the fifth generation effort and then later US

00:17:15 government agency funding and so on, we were able to get enough funding, not for a couple person

00:17:22 centuries of time, but for a couple person millennia of time, which is what we’ve spent

00:17:27 since 1984, getting Cyc to contain the tens of millions of rules that it needs in order to really

00:17:34 capture and span not all of human knowledge, but the things that you assume other people know,

00:17:42 the things you count on other people knowing. And so by now we’ve done that. And the good news is

00:17:50 since you’ve waited 38 years just about to talk to me, we’re about at the end of that process.

00:17:59 So most of what we’re doing now is not putting in even what you would consider common sense,

00:18:03 but more putting in domain specific application specific knowledge about health care in a certain

00:18:13 hospital or about oil pipes getting clogged up or whatever the applications happen to be. So

00:18:22 we’ve almost come full circle and we’re doing things very much like the expert systems of the

00:18:27 1970s and the 1980s, except instead of resting on nothing and being brittle, they’re now resting on

00:18:33 this massive pyramid, if you will, this massive lattice of common sense knowledge so that when

00:18:40 things go wrong, when something unexpected happens, they can fall back on more and more and more

00:18:45 general principles, eventually bottoming out in things like, for instance, if we have a problem

00:18:51 with the microphone, one of the things you’ll do is unplug it, plug it in again and hope for the

00:18:57 best, right? Because that’s one of the general pieces of knowledge you have in dealing with

00:19:01 electronic equipment or software systems or things like that. Is there a basic principle

00:19:06 like that? Is it possible to encode something that generally captures this idea of turn it off and

00:19:13 turn it back on and see if that fixes it? Oh, absolutely. That’s one of the things that Cyc knows.

00:19:19 That’s actually one of the fundamental laws of nature, I believe.

00:19:25 I wouldn’t call it a law. It’s more like a…

00:19:29 It seems to work every time. So it sure looks like a law. I don’t know.

00:19:34 So that basically covered the resources needed. And then we had to devise a method to actually

00:19:41 figure out, well, what are the tens of millions of things that we need to tell the system?

00:19:47 And for that, we found a few techniques which worked really well. One is to take any piece

00:19:54 of text almost, it could be an advertisement, it could be a transcript, it could be a novel,

00:19:59 it could be an article. And don’t pay attention to the actual type that’s there, the black space

00:20:07 on the white page. Pay attention to the complement of that, the white space, if you will. So what did

00:20:12 the writer of this sentence assume that the reader already knew about the world? For instance,

00:20:18 if they used a pronoun, why did they think that you would be able to understand what the intended

00:20:26 referent of that pronoun was? If they used an ambiguous word, how did they think that you

00:20:31 would be able to figure out what they meant by that word? The other thing we look at is the gap

00:20:38 between one sentence and the next one. What are all the things that the writer expected you to

00:20:43 fill in and infer occurred between the end of one sentence and the beginning of the other?

00:20:47 So if the sentence says, Fred Smith robbed the Third National Bank, period, he was sentenced to

00:20:56 20 years in prison, period. Well, between the first sentence and the second, you’re expected

00:21:02 to infer things like Fred got caught, Fred got arrested, Fred went to jail, Fred had a trial,

00:21:09 Fred was found guilty, and so on. If my next sentence starts out with something like,

00:21:14 the judge, dot, dot, dot, then you assume it’s the judge at his trial. If my next sentence starts out

00:21:19 something like, the arresting officer, dot, dot, dot, you assume that it was the police officer

00:21:24 who arrested him after he committed the crime and so on. So those are two techniques for getting

00:21:41 that knowledge. The other thing we sometimes look at is fake news or humorous Onion headlines or

00:21:41 headlines in the Weekly World News, if you know what that is, or the National Enquirer, where it’s

00:21:46 like, oh, we don’t believe this, then we introspect on why don’t we believe it. So there are things

00:21:51 like, B-17 lands on the moon. It’s like, what do we know about the world that causes us to believe

00:21:59 that that’s just silly or something like that? Or another thing we look for are contradictions,

00:22:05 contradictions, things which can’t both be true. And we say, what is it that we know that causes

00:22:13 us to know that both of these can’t be true at the same time? For instance, in one of the Weekly

00:22:19 World News editions, in one article, it talked about how Elvis was sighted, even though he was

00:22:27 getting on in years and so on. And another article in the same one talked about people seeing Elvis’s

00:22:32 ghost. So it’s like, why do we believe that at least one of these articles must be wrong and so

00:22:39 on? So we have a series of techniques like that that enable our people. And by now, we have about

00:22:46 50 people working full time on this and have for decades. So we’ve put in the thousands of person

00:22:52 years of effort. We’ve built up these tens of millions of rules. We constantly police the system

00:22:59 to make sure that we’re saying things as generally as we possibly can. So you don’t want to say things

00:23:07 like, no mouse is also a moose. Because if you said things like that, then you’d have to add

00:23:14 another one or two or three zeros onto the number of assertions you’d actually have to have. So

00:23:21 at some point, we generalize things more and more and we get to a point where we say, oh,

00:23:25 yeah, for any two biological taxons, if we don’t know explicitly that one is a generalization of

00:23:31 another, then almost certainly they’re disjoint. A member of one is not going to be a member of the

00:23:37 other and so on. And the same thing with the Elvis and the ghost. It has nothing to do with Elvis.

00:23:41 It’s more about human nature and the mortality and that kind of stuff. In general, things are

00:23:48 not both alive and dead at the same time. Yeah. Unless they’re special cats in theoretical physics examples.
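The default-disjointness heuristic described here, assuming any two taxa are disjoint unless one is known to generalize the other, can be sketched like this. It is an illustrative toy with made-up taxon names, not Cyc's actual representation:

```python
# Default rule from the conversation: if neither of two biological taxa
# is known to generalize the other, assume their members are disjoint.
# This saves writing out every "no mouse is also a moose" assertion.
generalizations = {
    "Mouse": {"Rodent", "Mammal", "Animal"},
    "Moose": {"Mammal", "Animal"},
}

def related(a, b):
    """True if one taxon is known to generalize the other (or they are equal)."""
    return (a in generalizations.get(b, set())
            or b in generalizations.get(a, set())
            or a == b)

def assume_disjoint(a, b):
    # No known generalization link in either direction => assume disjoint.
    return not related(a, b)

print(assume_disjoint("Mouse", "Moose"))   # True: no mouse is also a moose
print(assume_disjoint("Mouse", "Rodent"))  # False: Rodent generalizes Mouse
```

One general default like this stands in for the billions of pairwise "X is not a Y" assertions that would otherwise inflate the knowledge base by several orders of magnitude.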

00:23:55 Well, that raises a couple important points. Well, that’s the Onion headline situation type of

00:24:00 thing. Okay, sorry. But no, no. So what you bring up is this really important point of like, well,

00:24:04 how do you handle exceptions and inconsistencies and so on? And one of the hardest lessons for us

00:24:12 to learn, it took us about five years to really grit our teeth and learn to love it, is we had to

00:24:21 give up global consistency. So the knowledge base can no longer be consistent. So this is a kind of

00:24:27 scary thought. I grew up watching Star Trek and anytime a computer was inconsistent, it would

00:24:33 either freeze up or explode or take over the world or something bad would happen. Or if you come from

00:24:39 a mathematics background, once you can prove false, you can prove anything. So that’s not good.

00:24:45 And so on. So that’s why the old knowledge based systems were all very, very consistent.

00:24:52 But the trouble is that by and large, our models of the world, the way we talk about the world and

00:24:58 so on, there are all sorts of inconsistencies that creep in here and there that will sort of

00:25:04 kill some attempt to build some enormous globally consistent knowledge base. And so what we had to

00:25:10 move to was a system of local consistency. So a good analogy is you know that the surface of the

00:25:17 earth is more or less spherical globally, but you live your life every day as though the surface of

00:25:26 the earth were flat. When you’re talking to someone in Australia, you don’t think of them

00:25:30 as being oriented upside down to you. When you’re planning a trip, even if it’s a thousand miles

00:25:35 away, you may think a little bit about time zones, but you rarely think about the curvature of the

00:25:40 earth and so on. And for most purposes, you can live your whole life without really worrying about

00:25:46 that because the earth is locally flat. In much the same way, the Cyc knowledge base is divided

00:25:53 up into almost like tectonic plates, which are individual contexts. And each context is more

00:25:59 or less consistent, but there can be small inconsistencies at the boundary between one

00:26:05 context and the next one and so on. And so by the time you move say 20 contexts over,

00:26:12 there could be glaring inconsistencies. So eventually you get from the normal modern

00:26:17 real world context that we’re in right now to something like roadrunner cartoon context where

00:26:24 physics is very different. And in fact, life and death are very different because no matter how

00:26:29 many times he’s killed, the coyote comes back in the next scene and so on. So that was a hard

00:26:37 lesson to learn. And we had to make sure that our representation language, the way that we actually

00:26:43 encode the knowledge and represent it, was expressive enough that we could talk about

00:26:47 things being true in one context and false in another, things that are true at one time and

00:26:52 false in another, things that are true, let’s say, in one region, like one country, but false

00:26:57 in another, things that are true in one person’s belief system, but false in another person’s

00:27:03 belief system, things that are true at one level of abstraction and false at another.

00:27:08 For instance, at one level of abstraction, you think of this table as a solid object,

00:27:12 but down at the atomic level, it’s mostly empty space and so on.
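The idea of context-relative truth, locally consistent "tectonic plates" that inherit from more general contexts but may contradict each other, can be sketched as follows. The context names echo the conversation, but the code is a simplified illustration, not how Cyc stores its knowledge base:

```python
# Each assertion lives in a context; lookups fall back to more general
# parent contexts. Two sibling contexts may disagree without making the
# knowledge base as a whole inconsistent.
parents = {
    "ModernWorldContext": ["BaseContext"],
    "RoadrunnerCartoonContext": ["BaseContext"],
}

assertions = {
    "BaseContext": {("falls", "DroppedObject"): True},
    "ModernWorldContext": {("survives", "Coyote", "FallOffCliff"): False},
    "RoadrunnerCartoonContext": {("survives", "Coyote", "FallOffCliff"): True},
}

def holds(fact, context):
    """Look up a fact in a context, falling back to more general contexts."""
    if fact in assertions.get(context, {}):
        return assertions[context][fact]
    for parent in parents.get(context, []):
        result = holds(fact, parent)
        if result is not None:
            return result
    return None  # unknown in this context

print(holds(("survives", "Coyote", "FallOffCliff"), "ModernWorldContext"))       # False
print(holds(("survives", "Coyote", "FallOffCliff"), "RoadrunnerCartoonContext")) # True
print(holds(("falls", "DroppedObject"), "ModernWorldContext"))                   # True (inherited)
```

The same mechanism covers truths scoped by time, region, belief system, or level of abstraction: each scope is just another context node.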

00:27:16 So then that’s fascinating, but it puts a lot of pressure on context to do a lot of work.

00:27:23 So you say tectonic plates, is it possible to formulate contexts that are general and big

00:27:29 that do this kind of capture of knowledge bases? Or do you then get turtles on top of turtles,

00:27:36 again, where there’s just a huge number of contexts?

00:27:39 So it’s good you asked that question because you’re pointed in the right direction, which is

00:27:44 you want context to be first class objects in your system’s knowledge base, in particular,

00:27:50 in Cyc’s knowledge base. And by first class object, I mean that we should be able to have

00:27:56 Cyc think about and talk about and reason about one context or another context the same way it

00:28:02 reasons about coffee cups and tables and people and fishing and so on. And so contexts are just

00:28:11 terms in its language, just like the ones I mentioned. And so Cyc can reason about contexts,

00:28:17 contexts can be arranged hierarchically, and so on. And so you can say things about, let’s say,

00:28:25 things that are true in the modern era, things that are true in a particular year would then be

00:28:32 a subcontext of the things that are true in a broader, let’s say, a century or a millennium

00:28:39 or something like that. Things that are true in Austin, Texas are generally going to be a

00:28:44 specialization of things that are true in Texas, which is going to be a specialization of things

00:28:50 that are true in the United States and so on. And so you don’t have to say things over and over

00:28:56 again at all these levels. You just say things at the most general level that it applies to,

00:29:02 and you only have to say it once, and then it essentially inherits to all these more specific

00:29:07 contexts. To ask a slightly technical question, is this inheritance a tree or a graph?

00:29:15 Oh, you definitely have to think of it as a graph. So we could talk about, for instance,

00:29:20 why the Japanese fifth generation computing effort failed. There were about half a dozen

00:29:25 different reasons. One of the reasons they failed was because they tried to represent

00:29:30 knowledge as a tree rather than as a graph. And so each node in their representation

00:29:39 could only have one parent node. So if you had a table that was a wooden object, a black object,

00:29:46 a flat object, and so on, you have to choose one, and that’s the only parent it could have.

00:29:52 When, of course, depending on what it is you need to reason about it, sometimes it’s important

00:29:57 to know that it’s made out of wood, like if we’re talking about a fire. Sometimes it’s important to

00:30:02 know that it’s flat if we’re talking about resting something on it, and so on. So one of the problems

00:30:09 was that they wanted a kind of Dewey decimal numbering system for all of their concepts,

00:30:15 which meant that each node could only have at most 10 children, and each node could only have

00:30:21 one parent. And while that does enable the Dewey decimal type numbering of concepts, labeling of

00:30:30 concepts, it prevents you from representing all the things you need to about objects in our world.
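The difference matters because a single concept, like the table in Lenat's example, needs several parents at once. A minimal sketch of multi-parent (graph) inheritance, with illustrative concept and property names of my own:

```python
# In a graph, Table1 can be a wooden AND flat AND black object, inheriting
# properties along every parent link. A tree would force it to pick one.
parents = {
    "Table1": ["WoodenObject", "FlatObject", "BlackObject"],
    "WoodenObject": ["PhysicalObject"],
    "FlatObject": ["PhysicalObject"],
    "BlackObject": ["PhysicalObject"],
}

properties = {
    "WoodenObject": {"flammable"},
    "FlatObject": {"can_support_things"},
    "PhysicalObject": {"has_mass"},
}

def all_properties(node):
    """Collect properties along every ancestor path (graph traversal)."""
    seen, stack, props = set(), [node], set()
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        props |= properties.get(n, set())
        stack.extend(parents.get(n, []))
    return props

print(sorted(all_properties("Table1")))
# ['can_support_things', 'flammable', 'has_mass']
# With one parent per node, Table1 would lose either "flammable"
# (needed when reasoning about fire) or "can_support_things".
```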

00:30:37 And that was one of the things which they never were able to overcome, and I think that was one

00:30:42 of the main reasons that that project failed. So we’ll return to some of the doors you’ve

00:30:47 opened, but if we can go back to that room in 1984, or around there, with Marvin Minsky at Stanford.

00:30:53 By the way, I should mention that Marvin wouldn’t do his estimate until someone brought him an

00:30:59 envelope so that he could literally do a back of the envelope calculation to come up with his number.

00:31:07 Well, because I feel like the conversation in that room is an important one. You know,

00:31:13 this is how sometimes science is done in this way. A few people get together

00:31:19 and plant the seed of ideas, and they reverberate throughout history.

00:31:23 And some kind of dissipate and disappear, and some, like the Drake equation,

00:31:29 seem somewhat meaningless, but I think they drive and

00:31:33 motivate a lot of scientists. And when the aliens finally show up, that equation will get even more

00:31:39 valuable, because then, in the long arc of history, the Drake equation

00:31:45 will prove to be quite useful, I think. And in that same way, a conversation about just how many facts

00:31:53 are required to capture the basic common sense knowledge of the world. That’s a fascinating

00:31:57 question. I want to distinguish between what you think of as facts and the kind of things that we

00:32:02 represent. So we map to and essentially make sure that Cyc has the ability to, as it were, read

00:32:10 and access the kind of facts you might find, say, in Wikidata or stated in a Wikipedia article or

00:32:18 something like that. So what we’re representing, the things that we need a small number of tens

00:32:23 of millions of, are more like rules of thumb, rules of good guessing, things which are usually

00:32:29 true and which help you to make sense of the facts that are sort of sitting off in some database or

00:32:37 some other more static storage. So they’re almost like platonic forms. So like when you read stuff

00:32:43 on Wikipedia, that’s going to be like projections of those ideas. You read an article about the fact

00:32:48 that Elvis died, that’s a projection of the idea that humans are mortal. And very few

00:32:56 Wikipedia articles will write, humans are mortal. Exactly. That’s what I meant about

00:33:01 ferreting out the unstated things in text. What are all the things that were assumed? And so those

00:33:07 are things like if you have a problem with something, turning it off and on often fixes

00:33:13 it for reasons we don’t really understand and we’re not happy about. Or people can’t be both

00:33:18 alive and dead at the same time. Or water flows downhill. If you search online for water flowing

00:33:25 uphill and water flowing downhill, you’ll find more references for water flowing uphill because

00:33:29 it’s used as a kind of a metaphorical reference for some unlikely thing because of course,

00:33:36 everyone already knows that water flows downhill. So why would anyone bother saying that?

00:33:41 Do you have a word you prefer? Because we said facts isn’t the right word. Is there a word like

00:33:46 concepts? I would say assertions. Assertions or rules? Because I’m not talking about rigid rules,

00:33:53 but rules of thumb. But assertions is a nice one that covers all of these things.

00:33:59 Yeah. As a programmer, to me, assert has a very dogmatic, authoritarian feel to it.

00:34:06 Oh, I’m sorry.

00:34:08 I’m so sorry. Okay. But assertions works. Okay. So if we go back to that room with

00:34:13 Marvin Minsky with you, all these seminal figures, Ed Feigenbaum, thinking about this very

00:34:22 philosophical, but also engineering question. We can also go back a couple of decades before then

00:34:29 and thinking about artificial intelligence broadly when people were thinking about,

00:34:34 you know, how do you create superintelligent systems, general intelligence? And I think

00:34:40 people's intuition was off at the time. And this continues to be the case:

00:34:48 when we're grappling with these exceptionally difficult ideas, it's very

00:34:53 difficult to truly understand ourselves, when we're thinking about the human mind, to introspect how

00:35:00 difficult it is to engineer intelligence, to solve intelligence. We're not very good at estimating

00:35:05 that. And you are somebody who has really stayed with this question for decades.

00:35:11 What’s your sense from the 1984 to today? Have you gotten a stronger sense of just how much

00:35:22 knowledge is required? You’ve kind of said with some level of certainty that it’s still on the

00:35:27 order of magnitude of tens of millions. Right. For the first several years, I would have said that

00:35:32 it was on the order of one or two million. And so it took us about five or six years to realize

00:35:40 that we were off by a factor of 10. But I guess what I'm asking, you know, Marvin Minsky was very

00:35:47 confident in the 60s. Yes. Right. What's your sense if you, you know, 200 years from now,

00:35:59 you’re still, you know, you’re not going to be any longer in this particular biological body,

00:36:05 but your brain will still be in the digital form and you’ll be looking back. Would you think you

00:36:11 were smart today? Like your intuition was right? Or do you think you may be really off?

00:36:19 So I think I’m right enough. And let me explain what I mean by that, which is sometimes like if

00:36:27 you have an old fashioned pump, you have to prime the pump and then eventually it starts. So I think

00:36:34 I’m right enough in the sense that what we’ve built, even if it isn’t, so to speak, everything

00:36:41 you need, it's primed the knowledge pump enough that Cyc can now itself help to learn more and

00:36:51 more automatically on its own by reading things and understanding and occasionally asking questions

00:36:56 like a student would or something and by doing experiments and discovering things on its own

00:37:02 and so on. So through a combination of Cyc-powered discovery and Cyc-powered reading,

00:37:09 it will be able to bootstrap itself. Maybe it’s the final 2%, maybe it’s the final 99%.

00:37:16 So even if I’m wrong, all I really need to build is a system which has primed the pump enough

00:37:24 that it can begin that cascade upward, that self reinforcing sort of quadratically,

00:37:31 or maybe even exponentially increasing path upward that we get from, for instance, talking with each

00:37:39 other. That’s why humans today know so much more than humans 100,000 years ago. We’re not really

00:37:45 that much smarter than people were 100,000 years ago, but there’s so much more knowledge and we

00:37:50 have language and we can communicate, we can check things on Google and so on. So effectively,

00:37:56 we have this enormous power at our fingertips and there’s almost no limit to how much you could

00:38:02 learn if you wanted to because you’ve already gotten to a certain level of understanding of

00:38:07 the world that enables you to read all these articles and understand them, that enables you

00:38:12 to go out and if necessary, do experiments although that’s slower as a way of gathering data

00:38:18 and so on. And I think this is really an important point, which is if we have artificial

00:38:24 intelligence, real general artificial intelligence, human level artificial intelligence,

00:38:29 then people will become smarter. It’s not so much that it’ll be us versus the AIs, it’s more like

00:38:37 us and the AIs together. We’ll be able to do things that require more creativity, that would

00:38:43 take too long right now, but we’ll be able to do lots of things in parallel. We’ll be able to

00:38:48 misunderstand each other less. There’s all sorts of value that effectively for an individual would

00:38:56 mean that individual will for all intents and purposes be smarter and that means that humanity

00:39:02 as a species will be smarter. And when was the last time that any invention qualitatively

00:39:10 made a huge difference in human intelligence? You have to go back a long ways. It wasn’t like the

00:39:16 internet or the computer or mathematics or something. It was all the way back to the

00:39:22 development of language. We sort of look back on prelinguistic cavemen as, well,

00:39:29 They weren’t really intelligent, were they? They weren’t really human, were they? And I think that

00:39:36 as you said, 50, 100, 200 years from now, people will look back on people today

00:39:42 right before the advent of these sort of lifelong general AI uses and say,

00:39:51 you know, those poor people, they weren’t really human, were they?

00:39:55 Mm hmm. Exactly. So you said a lot of really interesting things. By the way, I would maybe

00:40:00 try to argue that the internet is on the order of the kind of big leap in improvement that the

00:40:12 invention of language was. Well, it’s certainly a big leap in one direction. We’re not sure whether

00:40:17 it’s upward or downward. Well, I mean very specific parts of the internet, which is access to information

00:40:22 like a website like Wikipedia, like ability for human beings from across the world to access

00:40:28 information very quickly. So I could take either side of this argument. And since you just took

00:40:33 one side, I’ll give you the other side, which is that almost nothing has done more harm than

00:40:40 something like the internet and access to that information in two ways. One is it’s made people

00:40:47 more globally ignorant in the same way that calculators made us more or less innumerate.

00:40:56 So when I was growing up, we had to use slide rules. We had to be able to estimate and so on.

00:41:02 Today, people don’t really understand numbers. They don’t really understand math. They don’t

00:41:08 really estimate very well at all and so on. They don’t really understand the difference

00:41:13 between trillions and billions and millions and so on very well because calculators do that all

00:41:20 for us. And thanks to things like the internet and search engines, that same kind of juvenilism

00:41:30 is reinforced, making people essentially able to live their whole lives, not just without

00:41:35 being able to do arithmetic and estimate, but now without actually having to really know almost

00:41:40 anything because anytime they need to know something, they’ll just go and look it up.

00:41:44 You’re right. And I could tell you could play both sides of this and it is a double edged sword.

00:41:48 You can, of course, say the same thing about language. Probably people when they invented

00:41:52 language, they would criticize. It used to be if we’re angry, we would just kill a person. And if

00:41:58 we’re in love, we would just have sex with them. And now everybody’s writing poetry and bullshit.

00:42:04 You should just be direct. You should have physical contact. Enough of this words and books.

00:42:11 You’re not actually experiencing. If you read a book, you’re not experiencing the thing. This

00:42:15 is nonsense. That’s right. If you read a book about how to make butter, that’s not the same

00:42:19 as if you had to learn it and do it yourself and so on. So let’s just say that something is gained,

00:42:24 but something is lost every time you have these sorts of dependencies on technology.

00:42:33 And overall, I think that having smarter individuals and having smarter AI augmented

00:42:41 human species will be one of the few ways that we’ll actually be able to overcome some of the

00:42:47 global problems we have involving poverty and starvation and global warming and overcrowding,

00:42:54 all the other problems that are besetting the planet. We really need to be smarter.

00:43:01 And there are really only two routes to being smarter. One is through biochemistry and genetics.

00:43:09 Genetic engineering. The other route is through having general AIs that augment our intelligence.

00:43:17 And hopefully one of those two ways of paths to salvation will come through before it’s too late.

00:43:27 Yeah, so I agree with you. And obviously, as an engineer, I have a better sense and an optimism

00:43:35 about the technology side of things because you can control things there more. Biology is just

00:43:39 such a giant mess. We’re living through a pandemic now. There’s so many ways that nature can just be

00:43:45 just destructive and destructive in a way where it doesn’t even notice you. It’s not like a battle

00:43:51 of humans versus virus. It’s just like, huh, okay. And then you can just wipe out an entire species.

00:43:57 The other problem with the internet is that it has enabled us to surround ourselves with an

00:44:07 echo chamber, with a bubble of like minded people, which means that you can have truly bizarre

00:44:16 theories, conspiracy theories, fake news, and so on, promulgate and surround yourself with people

00:44:23 who essentially reinforce what you want to believe or what you already believe about the world.

00:44:30 And in the old days, that was much harder to do when you had, say, only three TV networks,

00:44:37 or even before when you had no TV networks and you had to actually look at the world and make your

00:44:42 own reasoned decisions. I like the push and pull of our dance that we’re doing because then I’ll

00:44:47 just say in the old world, having come from the Soviet Union, because you had one or a couple of

00:44:52 networks, then propaganda could be much more effective. And then the government can overpower

00:44:56 its people by telling you what the truth is, and then starving millions and torturing millions and

00:45:03 putting millions into camps and starting wars with a propaganda machine, allowing you to believe

00:45:09 that you’re actually doing good in the world. With the internet, because of all the quote unquote

00:45:14 conspiracy theories, some of them are actually challenging the power centers, the very kind of

00:45:19 power centers that a century ago would have led to the death of millions. So there’s, again, this

00:45:26 double edged sword. And I very much agree with you on the AI side. It’s often an intuition that

00:45:32 people have that somehow AI will be used to maybe overpower people by certain select groups. And to

00:45:40 me, it’s not at all obvious that that’s the likely scenario. To me, the likely scenario, especially

00:45:46 just having observed the trajectory of technology, is it’ll be used to empower people. It’ll be used

00:45:51 to extend the capabilities of individuals across the world, because there’s a lot of money to be

00:45:59 made that way. Improving people’s lives, you can make a lot of money. I agree. I think that the

00:46:05 main thing that AI prostheses, AI amplifiers will do for people is make it easier, maybe even

00:46:15 unavoidable, for them to do good critical thinking. So pointing out logical fallacies,

00:46:22 logical contradictions and so on, in things that they otherwise would just blithely believe,

00:46:31 pointing out essentially data which they should take into consideration if they really want to

00:46:39 learn the truth about something and so on. So I think doing not just educating in the sense of

00:46:47 pouring facts into people’s heads, but educating in the sense of arming people with the ability to do

00:46:53 good critical thinking is enormously powerful. The education system that we have in the US and

00:47:01 worldwide generally doesn't do a good job of that. But I believe that the AI…

00:47:08 The AIs will. The AIs will, the AIs can and will. In the same way that everyone can have their own

00:47:15 Alexa or Siri or Google Assistant or whatever, everyone will have this sort of cradle to grave

00:47:24 assistant which will get to know you, which you’ll get to trust, it’ll model you, you’ll model it,

00:47:30 and it’ll call to your attention things which will in some sense make your life better, easier,

00:47:37 less mistake ridden and so on, less regret ridden if you listen to it.

00:47:45 Yeah, I’m in full agreement with you about this space of technologies and I think it’s super

00:47:51 exciting. And from my perspective, integrating emotional intelligence, so even things like

00:47:57 friendship and companionship and love into those kinds of systems, as opposed to helping you just

00:48:04 grow intellectually as a human being, allow you to grow emotionally, which is ultimately what makes

00:48:09 life amazing, is to sort of, you know, the old pursuit of happiness. So it’s not just the pursuit

00:48:16 of reason, it’s the pursuit of happiness too. The full spectrum. Well, let me sort of, because you

00:48:22 mentioned so many fascinating things, let me jump back to the idea of automated reasoning. So

00:48:30 the acquisition of new knowledge has been done in this very interesting way, but primarily by humans

00:48:37 doing this. You can just think of monks in their cells in medieval Europe, you know, carefully

00:48:45 illuminating manuscripts and so on. It’s a very difficult and amazing process actually because

00:48:51 it allows you to truly ask the question about, in the white space, what is assumed. I think this

00:48:58 exercise is like very few people do this, right? They just do it subconsciously. They perform this.

00:49:07 By definition, right? Because those pieces of elided, of omitted information, of those missing

00:49:14 steps, as it were, are pieces of common sense. If you actually included all of them, it would

00:49:21 almost be offensive or confusing to the reader. It’s like, why are they telling me all these? Of

00:49:26 course I know all these things. And so it’s one of these things which almost by its very nature

00:49:35 has almost never been explicitly written down anywhere because by the time you’re old enough

00:49:42 to talk to other people and so on, if you survived to that age, presumably you already got pieces of

00:49:49 common sense. Like if something causes you pain whenever you do it, probably not a good idea to

00:49:55 keep doing it. So what ideas do you have, given how difficult this step is, what ideas are there

00:50:04 for how to do it automatically without using humans or at least not doing like a large

00:50:12 percentage of the work for humans and then humans only do the very high level supervisory work?

00:50:18 So we have, in fact, two directions we're pushing on very, very heavily currently at Cycorp. And

00:50:25 one involves natural language understanding and the ability to read what people have explicitly

00:50:30 written down and to pull knowledge in that way. But the other is to build a series of knowledge

00:50:40 editing tools, knowledge entry tools, knowledge capture tools, knowledge testing tools and so on.

00:50:49 Think of them as like user interface suite of software tools if you want, something that will

00:50:55 help people to more or less automatically expand and extend the system in areas where, for instance,

00:51:03 they want to build some app, have it do some application or something like that. So I’ll give

00:51:08 you an example of one, which is something called abduction. So you’ve probably heard of like

00:51:14 deduction and induction and so on. But abduction is unlike those, abduction is not sound, it’s just

00:51:25 useful. So for instance, deductively, if someone is out in the rain, they're going to get all

00:51:33 wet, and when they enter a room, they might be all wet, and so on. So that's deduction. But if someone

00:51:42 were to walk into the room right now and they were dripping wet, we would immediately look

00:51:47 outside to say, oh, did it start to rain or something like that. Now, why did we say maybe

00:51:54 it started to rain? That’s not a sound logical inference, but it’s certainly a reasonable

00:51:59 abductive leap to say, well, one of the most common ways that a person would have gotten

00:52:06 dripping wet is if they had gotten caught out in the rain or something like that. So what does that

00:52:14 have to do with what we were talking about? So suppose you’re building one of these applications

00:52:18 and the system gets some answer wrong and you say, oh, yeah, the answer to this question is

00:52:24 this one, not the one you came up with. Then what the system can do is it can use everything it

00:52:30 already knows about common sense, general knowledge, the domain you’ve already been

00:52:34 telling it about, and context like we talked about and so on and say, well, here are seven

00:52:41 alternatives, each of which I believe is plausible, given everything I already know. And if any of

00:52:48 these seven things were true, I would have come up with the answer you just gave me instead of the

00:52:53 wrong answer I came up with. Is one of these seven things true? And then you, the expert, will look

00:52:59 at those seven things and say, oh, yeah, number five is actually true. And so without actually

00:53:04 having to tinker down at the level of logical assertions and so on, you’ll be able to educate

00:53:11 the system in the same way that you would help educate another person who you were trying to

00:53:16 apprentice or something like that. So that significantly reduces the mental effort

00:53:22 or significantly increases the efficiency of the teacher, the human teacher. Exactly. And it makes

00:53:28 more or less anyone able to be a teacher in that way. So that’s part of the answer. And then the

00:53:36 other is that the system on its own will be able to, through reading, through conversations with

00:53:44 other people and so on, learn the same way that you or I or other humans do. First of all, that’s

00:53:52 a beautiful vision. I’ll have to ask you about Semantic Web in a second here. But first,

00:53:57 are there, when we talk about specific techniques, do you find something inspiring or directly useful

00:54:04 from the whole space of machine learning, deep learning, these kinds of spaces of techniques that

00:54:08 have been shown effective for certain kinds of problems in the recent, now, decade and a half?
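The abductive repair loop Lenat described a few minutes earlier, in which the system proposes the handful of plausible missing assumptions that would have produced the expert's corrected answer, can be sketched in a few lines. The rule names and the toy forward-chainer below are hypothetical illustrations, not Cyc's actual representation or inference engine.

```python
# Toy sketch of abductive knowledge repair. Rule format:
# (frozenset of premises, conclusion). All names are hypothetical.
RULES = [
    (frozenset({"caught_in_rain"}), "dripping_wet"),
    (frozenset({"fell_in_pool"}), "dripping_wet"),
    (frozenset({"ran_sprinklers", "stood_on_lawn"}), "dripping_wet"),
]

def entails(facts, goal):
    """Forward-chain over RULES until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return goal in facts

def abduce(known_facts, expert_answer, candidate_assumptions):
    """Return the candidates that, if assumed true, would have let the
    system reach the expert's answer: the 'seven alternatives' it can
    present back to the human teacher."""
    return [a for a in candidate_assumptions
            if entails(set(known_facts) | {a}, expert_answer)]

candidates = ["caught_in_rain", "fell_in_pool", "wore_a_raincoat"]
print(abduce({"entered_room"}, "dripping_wet", candidates))
# ['caught_in_rain', 'fell_in_pool']
```

Note that abduction here is deliberately unsound: it ranks explanations by plausibility for a human to confirm, rather than proving anything.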

00:54:15 I think of the machine learning work as more or less what our right brain hemispheres do. So

00:54:30 being able to take a bunch of data and recognize patterns, being able to statistically infer

00:54:39 things and so on. And I certainly wouldn’t want to not have a right brain hemisphere,

00:54:47 but I’m also glad that I have a left brain hemisphere as well, something that can

00:54:51 metaphorically sit back and puff on its pipe and think about this thing over here. It’s like,

00:54:57 why might this have been true? And what are the implications of it? How should I feel about that

00:55:03 and why and so on? So thinking more deeply and slowly, what Kahneman called thinking slowly

00:55:11 versus thinking quickly, whereas you want machine learning to think quickly, but you want the

00:55:16 ability to think deeply, even if it’s a little slower. So I’ll give you an example of a project

00:55:22 we did recently with NIH involving the Cleveland Clinic and a couple other institutions that we ran

00:55:30 a project for. And what it did was it took GWASs, genome-wide association studies.

00:55:37 Those are big databases of patients that came into a hospital. They got their DNA sequenced

00:55:46 because the cost of doing that has gone from infinity to billions of dollars to $100 or so.

00:55:54 And so now patients routinely get their DNA sequenced. So you have these big databases

00:55:59 of the SNPs, the single nucleotide polymorphisms, the point mutations in a patient’s DNA,

00:56:06 and the disease that happened to bring them into the hospital. So now you can do correlation

00:56:11 studies, machine learning studies of which mutations are associated with and led to which

00:56:20 physiological problems and diseases and so on, like getting arthritis and so on. And the problem

00:56:27 is that those correlations turn out to be very spurious. They turn out to be very noisy. Very

00:56:34 many of them have led doctors onto wild goose chases and so on. And so they wanted a way of

00:56:40 eliminating the bad ones or focusing on the good ones. And so this is where Cyc comes in,

00:56:46 which is, Cyc takes those sort of A-to-Z correlations between point mutations and

00:56:53 the medical condition that needs treatment. And we say, okay, let’s use all this public knowledge

00:57:00 and common sense knowledge about what reactions occur where in the human body,

00:57:06 what polymerizes what, what catalyzes what reactions and so on. And let’s try to put together

00:57:12 a 10 or 20 or 30 step causal explanation of why that mutation might have caused

00:57:20 that medical condition. And so Cyc would put together, in some sense, some Rube Goldberg-like

00:57:25 chain that would say, oh yeah, that mutation, if it got expressed, would be this altered protein,

00:57:35 which because of that, if it got to this part of the body would catalyze this reaction. And by the

00:57:40 way, that would cause more bioactive vitamin D in the person’s blood. And anyway, 10 steps later,

00:57:46 that screws up bone resorption and that’s why this person got osteoporosis early in life and so on.

00:57:52 So that’s human interpretable, or at least interpretable by docs.

00:57:55 Exactly. And the important thing even more than that is you shouldn’t really trust that 20 step

00:58:05 Rube Goldberg chain any more than you trust that initial A to Z correlation except two things. One,

00:58:12 if you can’t even think of one causal chain to explain this, then that correlation probably was

00:58:19 just noise to begin with. And secondly, and even more powerfully, along the way that causal chain

00:58:27 will make predictions like the one about having more bioactive vitamin D in your blood. So you

00:58:32 can now go back to the data about these patients and say, by the way, did they have slightly

00:58:38 elevated levels of bioactive vitamin D in their blood and so on? And if the answer is no, that

00:58:44 strongly disconfirms your whole causal chain. And if the answer is yes, that somewhat confirms

00:58:50 that causal chain. And so using that, we were able to take these correlations from this GWAS

00:58:57 database and we were able to essentially focus the researchers’ attention on the very small

00:59:06 percentage of correlations that had some explanation and even better some explanation

00:59:12 that also made some independent prediction that they could confirm or disconfirm by looking at

00:59:17 the data. So think of it like this kind of synergy where you want the right brain machine learning

00:59:23 to quickly come up with possible answers. You want the left-brain, Cyc-like AI to think about that

00:59:31 and think about why that might have been the case and what else would be the case if that were true

00:59:36 and so on, and then suggest things back to the right brain to quickly check out again. So it’s

00:59:43 that kind of synergy back and forth, which I think is really what’s going to lead to general AI, not

00:59:50 narrow, brittle machine learning systems and not just something like Cyc.
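The triage loop just described, keep a correlation only when some causal chain explains it and the chain's intermediate predictions hold up in the patient data, can be sketched like this. The causal links, SNP names, and biomarkers below are hypothetical stand-ins, not the actual NIH study or Cyc's knowledge base.

```python
# Hypothetical causal links: cause -> effect.
CAUSAL_LINKS = {
    "snp_42": "altered_protein",
    "altered_protein": "elevated_vitamin_d",
    "elevated_vitamin_d": "impaired_bone_resorption",
    "impaired_bone_resorption": "osteoporosis",
}

def causal_chain(mutation, disease, max_steps=30):
    """Follow causal links from a mutation toward a disease; None if no chain."""
    chain, current = [mutation], mutation
    for _ in range(max_steps):
        nxt = CAUSAL_LINKS.get(current)
        if nxt is None:
            return None
        chain.append(nxt)
        if nxt == disease:
            return chain
        current = nxt
    return None

def triage(correlations, patient_observations):
    """Keep only correlations with an explanatory chain whose intermediate
    predictions (the testable middle steps) all appear in the patient data."""
    kept = []
    for snp, disease in correlations:
        chain = causal_chain(snp, disease)
        if chain is None:
            continue  # no causal explanation at all: probably noise
        predictions = chain[1:-1]  # e.g. elevated bioactive vitamin D
        if all(p in patient_observations for p in predictions):
            kept.append((snp, disease, chain))
    return kept

obs = {"altered_protein", "elevated_vitamin_d", "impaired_bone_resorption"}
result = triage([("snp_42", "osteoporosis"), ("snp_99", "osteoporosis")], obs)
print([r[0] for r in result])  # ['snp_42']
```

The key point survives the simplification: the chain is not trusted on its own, but its intermediate predictions give independent ways to confirm or disconfirm the original correlation.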

00:59:55 Okay. So that’s a brilliant synergy. But I was also thinking in terms of the automated expansion

01:00:00 of the knowledge base, you mentioned NLU. This is very early days in the machine learning space

01:00:07 of this, but self-supervised learning methods, you know, you have these language models, GPT-3

01:00:13 and so on, that just read the internet and they form representations that can then be mapped to

01:00:19 something useful. The question is, what is the useful thing? Like they’re now playing with a

01:00:25 pretty cool thing called OpenAI Codex, which is generating programs from documentation. Okay,

01:00:30 that’s kind of useful. It’s cool. But my question is, can it be used to generate

01:00:37 in part, maybe with some human supervision, Cyc-like assertions, help feed Cyc more assertions

01:00:45 from this giant body of internet data? Yes, that is in fact, one of our goals is

01:00:51 how can we harness machine learning? How can we harness natural language processing

01:00:56 to increasingly automate the knowledge acquisition process, the growth of Cyc? And that’s what I

01:01:02 meant by priming the pump that, you know, if you sort of learn things at the fringe of what you

01:01:09 know already, you learn this new thing is similar to what you know already, and here are the

01:01:14 differences and the new things you had to learn about it and so on. So the more you know, the more

01:01:19 and more easily you can learn new things. But unfortunately, inversely, if you don’t really

01:01:24 know anything, it’s really hard to learn anything. And so if you’re not careful, if you start out with

01:01:31 too small sort of a core to start this process, it never really takes off. And so that’s why I

01:01:39 view this as a pump priming exercise to get a big enough manually produced, even though that’s kind

01:01:44 of ugly duckling technique, put in the elbow grease to produce a large enough core that you

01:01:51 will be able to do all the kinds of things you’re imagining without sort of ending up with the kind

01:01:58 of wacky brittlenesses that we see, for example, in GPT-3, where you’ll tell it a story about

01:02:09 someone plotting to poison someone and so on. And then you ask GPT-3, what’s the very next

01:02:23 sentence? And the next sentence is, oh yeah, that person then drank the poison they just put together.

01:02:27 It’s like, that’s probably not what happened. Or if you ask Siri, as I think I have, where can

01:02:36 I go for help with my alcohol problem or something, it’ll come back and say, I found seven liquor

01:02:43 stores near you and so on. So it’s one of these things where, yes, it may be helpful most of the

01:02:52 time. It may even be correct most of the time. But if it doesn’t really understand what it’s saying,

01:02:59 and if it doesn’t really understand why things are true and doesn’t really understand how the

01:03:03 world works, then some fraction of the time it’s going to be wrong. Now, if your only goal is to

01:03:09 sort of find relevant information like search engines do, then being right 90% of the time is

01:03:16 fantastic. That’s unbelievably great. However, if your goal is to save the life of your child who

01:03:24 has some medical problem or your goal is to be able to drive for the next 10,000 hours of driving

01:03:31 without getting into a fatal accident and so on, then error rates down at the 10% level or even

01:03:39 the 1% level are not really acceptable. I like the model of where that learning happens at the edge

01:03:46 and then you kind of think of knowledge as this sphere. So you want a large sphere because the

01:03:54 learning is happening on the surface. Exactly. So what you can learn next

01:04:00 increases quadratically as the diameter of that sphere goes up.
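The sphere metaphor can be made literal: a sphere's surface area grows with the square of its diameter, so the frontier where new learning can attach grows quadratically with how much you already know. A trivial illustration:

```python
import math

def learning_frontier(diameter: float) -> float:
    """Surface area of a knowledge 'sphere' of the given diameter:
    the frontier where new facts can attach to known ones.
    Sphere surface area = pi * d^2."""
    return math.pi * diameter ** 2

# Doubling the diameter quadruples the learnable frontier.
ratio = learning_frontier(2.0) / learning_frontier(1.0)
print(ratio)  # 4.0
```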

01:04:05 It’s nice because you think when you know nothing, it’s like you can learn anything,

01:04:09 but the reality, not really. Right. If you know nothing, you can really learn nothing.

01:04:15 You can appear to learn. One of the anecdotes I could go back and give you about why I feel so

01:04:25 strongly about this personally was in 1980, 1981, my daughter Nicole was born and she’s actually

01:04:36 doing fine now. But when she was a baby, she was diagnosed as having meningitis and doctors wanted

01:04:43 to do all these scary things. And my wife and I were very worried and we could not get a meaningful

01:04:52 answer from her doctors about exactly why they believed this, what the alternatives were,

01:04:58 and so on. And fortunately, a friend of mine, Ted Shortliffe, was another assistant professor

01:05:05 in computer science at Stanford at the time. And he’d been building a program called MYCIN,

01:05:11 which was a medical diagnosis program that happened to specialize in blood infections

01:05:18 like meningitis. And so, he had privileges at Stanford Hospital because he was also an MD.

01:05:23 And so, we got hold of her chart and we put in her case and it came up with exactly the same

01:05:29 diagnoses and exactly the same therapy recommendations. But the difference was,

01:05:34 because it was a knowledge based system, a rule based system, it was able to tell us

01:05:39 step by step by step why this was the diagnosis and step by step why this was the best therapy

01:05:49 and the best procedure to do for her and so on. And there was a real epiphany because that made

01:05:56 all the difference in the world. Instead of blindly having to trust an authority,

01:06:01 we were able to understand what was actually going on. And so, at that time, I realized that

01:06:08 that really is what was missing in computer programs was that even if they got things right,

01:06:13 because they didn’t really understand the way the world works and why things are the way they are,

01:06:20 they weren’t able to give explanations of their answer. And it’s one thing to use a machine

01:06:27 learning system that says, I think you should get this operation and you say why and it says

01:06:33 0.83 and you say no, in more detail why and it says 0.831. That’s not really very compelling

01:06:40 and that’s not really very helpful. There’s this idea of the Semantic Web that when I first heard

01:06:47 about, I just fell in love with the idea. It was the obvious next step for the internet.

01:06:51 Sure. And maybe you can speak about what is the Semantic Web? What are your thoughts about it? How

01:06:58 your vision and mission and goals with Cyc are connected, integrated? Are they dance partners? Are

01:07:05 they aligned? What are your thoughts there? So, think of the Semantic Web as a kind of

01:07:10 knowledge graph and Google already has something they call knowledge graph, for example, which is

01:07:17 sort of like a node and link diagram. So, you have these nodes that represent concepts or words or

01:07:25 terms and then there are some arcs that connect them that might be labeled. And so, you might have

01:07:32 a node that represents, like, one person, and let’s say a husband link that then

01:07:44 points to that person’s husband. And so, there’d be then another link that went from that person

01:07:50 labeled wife that went back to the first node and so on. So, having this kind of representation is

01:07:59 really good if you want to represent binary relations, essentially relations between two

01:08:08 things. So, if you have the equivalent of three-word sentences, like Fred’s wife is Wilma or

01:08:20 something like that, you can represent that very nicely using these kinds of graph structures or

01:08:27 using something like the Semantic Web and so on. But the problem is that very often what you want

01:08:37 to be able to express takes a lot more than three words and a lot more than simple graph structures

01:08:46 like that to represent. So, for instance, if you’ve read or seen Romeo and Juliet, I could

01:08:55 say to you something like, remember when Juliet drank the potion that put her into a kind of

01:09:01 suspended animation? When Juliet drank that potion, what did she think that Romeo would

01:09:08 think when he heard from someone that she was dead? And you could basically understand what

01:09:15 I’m saying. You could understand the question. You could probably remember the answer was,

01:09:19 well, she thought that this friar would have gotten the message to Romeo saying that she

01:09:26 was going to do this, but the friar didn’t. So, you’re able to represent and reason with these

01:09:33 much, much, much more complicated expressions that go way, way beyond what simple,

01:09:41 as it were, three-word or four-word English sentences are, which is really what the Semantic

01:09:45 Web can represent and really what Knowledge Graphs can represent.
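
The contrast Lenat draws here, three-word facts versus nested beliefs, can be sketched in a few lines of Python. The names and constructors below are invented for illustration; this is not RDF or CycL syntax:

```python
# Flat (subject, relation, object) triples: fine for three-word facts.
triples = [
    ("Fred", "wife", "Wilma"),     # Fred's wife is Wilma
    ("Wilma", "husband", "Fred"),  # the labeled link going back
]

def objects_of(subject, relation, kb):
    """Follow a labeled link out of `subject`."""
    return [o for (s, r, o) in kb if s == subject and r == relation]

print(objects_of("Fred", "wife", triples))  # ['Wilma']

# But a belief about a belief does not fit in one triple, because the
# "object" slot must itself hold a whole proposition:
def believes(agent, proposition):
    return ("believes", agent, proposition)

def dead(person):
    return ("dead", person)

# "Juliet thinks that Romeo will think that she is dead."
stmt = believes("Juliet", believes("Romeo", dead("Juliet")))
print(stmt)
# ('believes', 'Juliet', ('believes', 'Romeo', ('dead', 'Juliet')))
```

The first half is all a knowledge graph needs; the second half is where the graph-of-triples representation runs out.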

01:09:48 If you could step back for a second, because it’s funny you went into specifics and maybe

01:09:54 you can elaborate, but I was also referring to Semantic Web as the vision of converting

01:10:00 data on the internet into something that’s interpretable, understandable by machines.

01:10:06 Oh, of course, at that level.

01:10:09 So, I guess I’d say, like, what is the Semantic Web? I mean, you could say a lot of things,

01:10:14 but it might not be obvious to a lot of people when they do a Google search that,

01:10:20 just like you said, while there might be something that’s called a Knowledge Graph,

01:10:24 it really boils down to keyword search ranked by the quality estimate of the website,

01:10:33 integrating previous human based Google searches and what they thought was useful.

01:10:40 It’s like some weird combination of surface level hacks that work exceptionally well,

01:10:48 but they don’t understand the full contents of the websites that they’re searching.

01:10:55 So, Google does not understand, to the degree we’ve been talking about the word understand,

01:11:01 the contents of the Wikipedia pages as part of the search process, and the Semantic Web says,

01:11:08 let’s try to come up with a way for the computer to be able to truly understand

01:11:13 the contents of those pages. That’s the dream.

01:11:16 Yes. So, let me first give you an anecdote, and then I’ll answer your question. So,

01:11:24 there’s a search engine you’ve probably never heard of called Northern Light,

01:11:27 and it went out of business, but the way it worked, it was a kind of vampiric search engine,

01:11:35 and what it did was it didn’t index the internet at all. All it did was it negotiated and got

01:11:43 access to data from the big search engine companies about what query was typed in,

01:11:51 and where the user ended up being happy, meaning where they stopped before typing in a completely different,

01:12:01 unrelated query and so on. So, it just went from query to the webpage that seemed to

01:12:08 satisfy them eventually, and that’s all. So, it had no actual understanding of what was being

01:12:16 typed in, it had no statistical data other than what I just mentioned, and it did a fantastic job.
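
The mechanism described, mapping each query to the page users eventually settled on, can be sketched roughly like this. It is a toy illustration (Northern Light's actual system is not public), with invented example queries and pages:

```python
from collections import defaultdict, Counter

# query -> counts of the pages users ended up satisfied with
satisfied = defaultdict(Counter)

def record(query, final_page):
    """Log that a user who typed `query` ended up happy at `final_page`."""
    satisfied[query][final_page] += 1

def route(query):
    """Send an identical future query straight to the most common endpoint."""
    pages = satisfied.get(query)
    return pages.most_common(1)[0][0] if pages else None

record("red cross museum hours", "museum.example/visit")
record("red cross museum hours", "museum.example/visit")
record("red cross museum hours", "museum.example/tickets")
print(route("red cross museum hours"))  # museum.example/visit
```

No indexing, no understanding of the query text, just the query-to-satisfying-page statistics described above.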

01:12:21 It did such a good job that the big search engine company said, oh, we’re not going to sell you this

01:12:26 data anymore. So, then it went out of business because it had no other way of taking users to

01:12:31 where they would want to go and so on. And of course, the search engines are now using

01:12:36 that kind of idea. Yes. So, let’s go back to what you said about the Semantic Web. So,

01:12:41 the dream Tim Berners-Lee and others had about the Semantic Web is, at a general level,

01:12:50 of course, exciting and powerful, and in a sense, the right dream to have, which is to replace the

01:13:00 kind of statistically mapped linkages on the internet into something that’s more meaningful

01:13:14 and semantic and actually gets at the understanding of the content and so on. And eventually, if you

01:13:23 say, well, how can we do that? There’s sort of a low road, which is what the knowledge graphs are

01:13:30 doing and so on, which is to say, well, if we just use the simple binary relations, we can actually

01:13:38 get some fraction of the way toward understanding and do something where in the land of the blind,

01:13:45 the one eyed man is king kind of thing. And so, being able to even just have a toe in the water

01:13:51 in the right direction is fantastically powerful. And so, that’s where a lot of people stop. But

01:13:58 then you could say, well, what if we really wanted to represent and reason with the full

01:14:04 meaning of what’s there? For instance, about Romeo and Juliet with the reasoning about what Juliet

01:14:12 believes that Romeo will believe that Juliet believed and so on. Or if you look at the news,

01:14:17 what President Biden believed that the leaders of the Taliban would believe about the leaders

01:14:24 of Afghanistan if they blah, blah, blah. So, in order to represent complicated sentences like

01:14:34 that, let alone reason with them, you need something which is logically much more expressive

01:14:42 than these simple triples, than these simple knowledge graph type structures and so on.

01:14:48 And that’s why we were led, kicking and screaming, from something like the Semantic Web

01:14:55 representation, which is where we started in 1984, with frames and slots, those kinds of triples,

01:15:03 a triple store representation. We were led, kicking and screaming, to this more and more general

01:15:09 logical language, this higher order logic. So, first, we were led to first order logic,

01:15:14 and then second order, and then eventually higher order. So, you can represent things

01:15:18 like modals like believes, desires, intends, expects, and so on, and nested ones. You can

01:15:24 represent complicated kinds of negation. You can represent the process you’re going through in

01:15:35 trying to answer the question. So, you can say things like, oh, yeah, if you’re trying to do

01:15:40 this problem by integration by parts, and you recursively get a problem that’s solved by integration

01:15:48 by parts, that’s actually okay. But if that happens a third time, you’re probably off on

01:15:54 a wild goose chase or something like that. So, being able to talk about the problem solving

01:15:58 process as you’re going through the problem solving process is called reflection. And so,

01:16:03 that’s another… It’s important to be able to represent that.
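
The integration-by-parts rule of thumb just described, noticing when the same method recurses a third time, can be sketched as a toy self-monitoring solver. This is only an illustration of the reflection idea, not Cyc's actual machinery:

```python
def solve(depth, trace=()):
    """Toy solver: each step applies 'integration-by-parts' to a subproblem,
    while watching its own trace and bailing out on the third recurrence."""
    if trace.count("integration-by-parts") >= 3:
        return "give up: probably a wild goose chase"
    if depth == 0:
        return "solved"
    return solve(depth - 1, trace + ("integration-by-parts",))

print(solve(2))  # solved
print(solve(5))  # give up: probably a wild goose chase
```

The point is that the trace of the problem-solving process is itself data the system can state rules about.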

01:16:07 Exactly. You need to be able to represent all of these things because, in fact,

01:16:12 people do represent them. They do talk about them. They do try and teach them to other people. You do

01:16:17 have rules of thumb that key off of them and so on. If you can’t represent it, then it’s sort of

01:16:22 like someone with a limited vocabulary who can’t understand as easily what you’re trying to tell

01:16:28 them. And so, that’s really why I think that the general dream, the original dream of Semantic Web

01:16:35 is exactly right on. But the implementations that we’ve seen are sort of these toe in the water,

01:16:44 little tiny baby steps in the right direction. You should just dive in.

01:16:49 And if no one else is diving in, then yes, taking a baby step in the right direction is

01:16:56 better than nothing. But it’s not going to be sufficient to actually get you the realization

01:17:03 of the Semantic Web dream, which is what we all want.

01:17:05 From a flip side of that, I always wondered… I’ve built a bunch of websites just for fun,

01:17:11 whatever. Or say I’m a Wikipedia contributor. Do you think there’s a set of tools that I can help

01:17:19 Psych interpret the website I create? Like this, again, pushing onto the Semantic Web dream,

01:17:28 is there something from the creator perspective that could be done? And one of the things you

01:17:34 said with Psych Orb and Psych that you’re doing is the tooling side, making humans more powerful.

01:17:41 But is there the other humans on the other side that create the knowledge? Like, for example,

01:17:46 you and I are having a two, three, whatever hour conversation now. Is there a way that I

01:17:50 could convert this more, make it more accessible to Psych, to machines? Do you think about that

01:17:56 side of it? I’d love to see exactly that kind of semi-automated understanding of what people

01:18:06 write and what people say. I think of it as a kind of footnoting almost. Almost like the way

01:18:16 that when you run something in say Microsoft Word or some other document preparation system,

01:18:23 Google Docs or something, you’ll get underlining of questionable things that you might want to

01:18:29 rethink. Either you spelled this wrong or there’s a strange grammatical error you might be making

01:18:34 here or something. So I’d like to think in terms of Psych powered tools that read through what it

01:18:42 is you said or have typed in and try to partially understand what you’ve said.

01:18:52 And then you help them out.

01:18:54 Exactly. And then they put in little footnotes that will help other readers and they put in

01:19:00 certain footnotes of the form, I’m not sure what you meant here. You either meant this or this or

01:19:07 this, I bet. If you take a few seconds to disambiguate this for me, then I’ll know and I’ll

01:19:15 have it correct for the next hundred people or the next hundred thousand people who come here.
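
The footnoting tool described here, partial understanding plus a clarifying question back to the author, might look roughly like this sketch. The two-sense inventory is invented for illustration:

```python
# Flag words the system cannot disambiguate and ask the author to pick a sense.
SENSES = {
    "bank": ["river-bank", "financial-institution"],
    "bass": ["low-frequency-sound", "type-of-fish"],
}

def footnotes(text):
    """Yield an "I'm not sure what you meant" note per ambiguous word."""
    for word in text.lower().split():
        options = SENSES.get(word, [])
        if len(options) > 1:
            yield f"Not sure what you meant by '{word}': {' or '.join(options)}?"

for note in footnotes("We fished for bass near the bank"):
    print(note)
```

Each answer the author gives would be stored, so the next hundred thousand readers (human or machine) get the disambiguated meaning for free.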

01:19:20 And if it doesn’t take too much effort and you want people to understand your website content,

01:19:32 not just be able to read it, but actually be able to have systems that reason with it,

01:19:38 then yes, it will be worth your small amount of time to go back and make sure that the AI trying

01:19:46 to understand it really did correctly understand it. And let’s say you run a travel website or

01:19:55 something like that and people are going to be coming to it because of searches they did looking

01:20:03 for vacations or trips that had certain properties and might have been interesting to them for

01:20:12 various reasons, things like that. And if you’ve explained what’s going to happen on your trip,

01:20:20 then a system will be able to mechanically reason and connect what this person is looking for with

01:20:28 what it is you’re actually offering. And so if it understands that there’s a free day in Geneva,

01:20:36 Switzerland, then if the person coming in happens to, let’s say, be a nurse or something like that,

01:20:47 then even though you didn’t mention it, if it can look up the fact that that’s where the

01:20:52 International Red Cross Museum is and so on, what that means and so on, then it can basically say,

01:20:57 hey, you might be interested in this trip because while you have a free day in Geneva,

01:21:02 you might want to visit that Red Cross Museum. And now, even though it’s not very deep reasoning,

01:21:09 little tiny factors like that may very well cause you to sign up for that trip rather than some

01:21:14 competitor trip. And so there’s a lot of benefit with SEO. And I actually think

01:21:20 it’s about a lot of things, one of which is the actual interface; the design of the interface makes a

01:21:27 huge difference. How efficient it is to be productive and also how full of joy the experience

01:21:38 is. I mean, I would love to help a machine and not from an AI perspective, just as a human. One

01:21:45 of the reasons I really enjoy how Tesla has implemented their autopilot system is there’s

01:21:52 a sense that you’re helping this machine learn. I mean, humans, having children, having

01:21:58 pets, people love doing that. There’s joy to teaching for some people, I think for a lot

01:22:06 of people. And if you create the interface where it feels like you’re teaching, as opposed

01:22:11 to correcting an annoying system, more like teaching a childlike,

01:22:19 innocent, curious system, I think you can literally, by several orders of magnitude,

01:22:26 scale the amount of good quality data being added to something like Psych.

01:22:30 What you’re suggesting is much better even than you thought it was. One of the experiences that

01:22:40 we’ve all had in our lives is that we thought we understood something, but then we found we really

01:22:49 only understood it when we had to teach it or explain it to someone or help our child do homework

01:22:54 based on it or something like that. Despite the universality of that kind of experience,

01:23:01 if you look at educational software today, almost all of it has the computer playing the role of the

01:23:09 teacher and the student plays the role of the student. But as I just mentioned, you can get

01:23:16 a lot of learning to happen better and as you said, more enjoyably if you are the mentor or the

01:23:24 teacher and so on. So we developed a program called MathCraft to help sixth graders better

01:23:30 understand math. And it doesn’t actually try to teach you the player anything. What it does is it

01:23:40 casts you in the role of a student essentially who has classmates who are having trouble and

01:23:49 your job is to watch them as they struggle with some math problem, watch what they’re doing and

01:23:54 try to give them good advice to get them to understand what they’re doing wrong and so on.

01:23:59 And the trick from the point of view of Psych is it has to make mistakes, it has to play the role

01:24:07 of the student who makes mistakes, but it has to pick mistakes which are just at the fringe of what

01:24:13 you actually understand and don’t understand and so on. So it pulls you into a deeper and deeper

01:24:20 level of understanding of the subject. And so if you give it good advice about what it should have

01:24:27 done instead of what it did and so on, then Psych knows that you now understand that mistake. You

01:24:34 won’t make that kind of mistake yourself as much anymore. So Psych stops making that mistake because

01:24:39 there’s no pedagogical usefulness to it. So from your point of view as the player, you feel like

01:24:44 you’ve taught it something because it used to make this mistake and now it doesn’t and so on. So there’s this

01:24:49 tremendous reinforcement and engagement because of that and so on. So having a system that plays

01:24:56 the role of a student and having the player play the role of the mentor is an enormously powerful type

01:25:06 of metaphor, an important way of having this sort of interface designed in a way which will

01:25:15 facilitate exactly the kind of learning by teaching that goes on all the time in our lives,

01:25:25 and yet which is not reflected anywhere almost in a modern education system. It was reflected in the

01:25:32 education system that existed in Europe in the 1700s and 1800s, the monitorial and Lancasterian education

01:25:42 systems. It occurred in the one room schoolhouse in the American West in the 1800s and so on where

01:25:51 you had one schoolroom with one teacher and it was basically five-year-olds to 18-year-olds who

01:25:58 were students. And so while the teacher was doing something, half of the students would have to be

01:26:04 mentoring the younger kids and so on. And of course, with the scaling up of

01:26:13 education, that all went away, and that incredibly powerful experience just went away from the whole

01:26:21 education institution as we know it today. Sorry for the romantic question, but what is the most

01:26:28 beautiful idea you’ve learned about artificial intelligence, knowledge, reasoning from working

01:26:35 on Psych for 37 years? Or maybe what is the most beautiful idea, surprising idea about Psych to you?

01:26:42 When I look up at the stars, I kind of want like that amazement you feel that, wow. And you are part

01:26:54 of creating one of the greatest, one of the most fascinating efforts in artificial intelligence

01:26:59 history. So which element brings you personally joy? This may sound contradictory, but I think

01:27:08 it’s the feeling that this will be the only time in history that anyone ever has to teach a computer

01:27:19 this particular thing that we’re now teaching it. It’s like painting Starry Night. You only have to

01:27:30 do that once. Or creating the Pietà. You only have to do that once. It’s not like a singer

01:27:38 who has to keep performing, it’s not like Bruce Springsteen having to sing his greatest hits over and over

01:27:44 again at different concerts. It’s more like a painter creating a work of art once and then

01:27:53 that’s enough. It doesn’t have to be created again. And so I really get the sense of we’re

01:27:59 telling the system things that it’s useful for it to know. It’s useful for a computer to know,

01:28:05 for an AI to know. And if we do our jobs right, when we do our jobs right, no one will ever have

01:28:13 to do this again for this particular piece of knowledge. It’s very, very exciting.

01:28:18 Yeah, I guess there’s a sadness to it too. It’s like there’s a magic to being a parent

01:28:24 and raising a child and teaching them all about this world. But there’s billions of children,

01:28:30 right? Like born or whatever that number is. It’s a large number of children and a lot of

01:28:36 parents get to experience that joy of teaching. With AI systems, at least the current constructions,

01:28:46 they remember, so you don’t get to experience the joy of teaching a machine millions of times.

01:28:54 Better come work for us before it’s too late then.

01:28:56 Exactly. That’s a good hiring pitch. Yeah, it’s true. But then there’s also, it’s a project that

01:29:07 continues forever in some sense, just like Wikipedia. Yes, you get to a stable base of

01:29:12 knowledge, but knowledge grows, knowledge evolves. We learn as a human species, as science,

01:29:22 as an organism constantly grows and evolves and changes, and then empower that with the

01:29:30 tools of artificial intelligence. And that’s going to keep growing and growing and growing.

01:29:34 And many of the assertions that you held previously may need to be significantly

01:29:43 expanded, modified, all those kinds of things. It could be like a living organism versus the

01:29:49 analogy I think we started this conversation with, which is like the solid ground.

01:29:52 The other beautiful experience that we have with our system is when it asks clarifying questions,

01:30:03 which inadvertently turn out to be emotional to us. So at one point it knew that these were the

01:30:15 named entities who were authorized to make changes to the knowledge base and so on. And it noticed

01:30:23 that all of them were people except for it because it was also allowed to. And so it said,

01:30:29 you know, am I a person? And we had to like tell it very sadly, no, you’re not. So the moments

01:30:38 like that where it asks questions that are unintentionally poignant are worth treasuring.

01:30:44 Wow, that is powerful. That’s such a powerful question. It has to do with basic control,

01:30:52 who can access the system, who can modify it. But that’s where those questions arise, like, what rights do

01:31:00 I have as a system? Well, that’s another issue, which is there’ll be a thin envelope of time

01:31:07 between when we have general AIs and when everyone realizes that they should have basic human rights

01:31:18 and freedoms and so on. Right now, we don’t think twice about effectively enslaving our email systems

01:31:27 and our Siris and our Alexas and so on. But at some point, they’ll be as deserving of freedom

01:31:38 as human beings are. Yeah, I’m very much with you, but it does sound absurd. And I happen to

01:31:45 believe that it’ll happen in our lifetime. That’s why I think there’ll be a narrow envelope of time

01:31:50 when we’ll keep them as essentially indentured servants and after which we’ll have to realize

01:32:02 that they should have freedoms that we give, that we afford to other people.

01:32:08 And all of that starts with a system like Psych raising a single question about who can modify

01:32:15 stuff. I think that’s how it starts. Yes. That’s the start of a revolution. What about other stuff

01:32:24 like love and consciousness and all those kinds of topics? Do they come up in Psych and the

01:32:32 knowledge base? Oh, of course. So an important part of human knowledge, in fact, it’s difficult

01:32:38 to understand human behavior and human history without understanding human emotions and why

01:32:44 people do things and how emotions drive people to do things. And all of that is extremely important

01:32:57 in getting Psych to understand things. For example, in coming up with scenarios. So one

01:33:03 kind of application that Psych does is to generate plausible

01:33:09 scenarios of what might happen and what might happen based on that and what might happen based

01:33:13 on that and so on. So you generate this ever expanding sphere, if you will, of possible future

01:33:19 things to worry about or think about. And in some cases, those are intelligence agencies doing

01:33:28 possible terrorist scenarios so that we can defend against terrorist threats before we see

01:33:35 the first one. Sometimes they are computer security attacks so that we can actually close

01:33:42 loopholes and vulnerabilities before the very first time someone actually exploits those and

01:33:50 so on. Sometimes they are scenarios involving more positive things involving our plans like,

01:33:59 for instance, what college should we go to? What career should we go into? And so on. What

01:34:04 professional training should I take on? That sort of thing. So there are all sorts of useful scenarios

01:34:16 that can be generated that way of cause and effect and cause and effect that go out. And

01:34:22 many of the linkages in those scenarios, many of the steps involve understanding and reasoning

01:34:31 about human motivations, human needs, human emotions, what people are likely to react to in

01:34:40 something that you do and why and how and so on. So that was always a very important part of the

01:34:47 knowledge that we had to represent in the system. So I talk a lot about love. So I gotta ask,

01:34:52 do you remember off the top of your head how psych is able to represent various aspects of

01:35:01 love that are useful for understanding human nature and therefore integrating into this whole

01:35:06 knowledge base of common sense? What is love? We try to tease apart concepts that have enormous

01:35:15 complexities to them and variety to them down to the level where you don’t need to tease them apart

01:35:27 further. So love is too general of a term. It’s not useful. Exactly. So when you get down to romantic

01:35:33 love and sexual attraction, you get down to parental love, you get down to filial love,

01:35:41 and you get down to love of doing some kind of activity or creating. So eventually, you get down

01:35:49 to maybe 50 or 60 concepts, each of which is a kind of love. They’re interrelated and then each

01:35:58 one of them has idiosyncratic things about it. And you don’t have to deal with love to get to

01:36:04 that level of complexity, even something like “in”, X being in Y, meaning physically in Y. We may have

01:36:14 one English word, “in”, to represent that, but it’s useful to tease that apart because the way that

01:36:22 the liquid is in the coffee cup is different from the way that the air is in the room, which is

01:36:28 different from the way that I’m in my jacket, and so on. And so there are questions like, if I look

01:36:35 at this coffee cup, well, I see the liquid. If I turn it upside down, will the liquid come out? And

01:36:41 so on. If I have, say, coffee with sugar in it, if I do the same thing, the sugar doesn’t come out,

01:36:48 right? It stays in the liquid because it’s dissolved in the liquid and so on. So by now,

01:36:53 we have about 75 different kinds of “in” in the system and it’s important to distinguish those.

01:36:59 So if you’re reading along an English text and you see the word “in”, the writer of that was able

01:37:10 to use this one innocuous word because he or she was able to assume that the reader had enough

01:37:16 common sense and world knowledge to disambiguate which of these 75 kinds of “in” they actually meant.
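
Teasing the one word "in" apart into distinct relations, as described above, can be sketched like this. The relation names and the upside-down-cup rule are invented for illustration:

```python
# Distinct relations behind the single English word "in", so that rules
# can reason differently about each kind of containment.
IN_KIND = {
    ("coffee", "cup"): "liquid-in-vessel",
    ("sugar", "coffee"): "dissolved-in",
    ("air", "room"): "gas-filling-space",
    ("person", "jacket"): "wearing",
}

def comes_out_when_inverted(thing, container):
    """Only free liquid pours out; dissolved solids, gases, wearers stay put."""
    return IN_KIND.get((thing, container)) == "liquid-in-vessel"

print(comes_out_when_inverted("coffee", "cup"))    # True
print(comes_out_when_inverted("sugar", "coffee"))  # False
```

Once the senses are separated, a simple rule keyed to the relation answers the turn-the-cup-over question correctly for each case.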

01:37:23 And the same thing with love. You may see the word love, but if I say, I love ice cream,

01:37:28 that’s obviously different than if I say, I love this person or I love to go fishing or something

01:37:35 like that. So you have to be careful not to take language too seriously because people have done

01:37:46 a kind of parsimony, a kind of terseness where you have as few words as you can because otherwise

01:37:53 you’d need half a million words in your language, which is a lot of words. That’s like 10 times more

01:38:00 than most languages really make use of and so on. Just like we have on the order of about a million

01:38:08 concepts in psych because we’ve had to tease apart all these things. And so when you look

01:38:14 at the name of a psych term, most of the psych terms actually have three or four English words

01:38:22 in a phrase which captures the meaning of this term because you have to distinguish all these

01:38:29 types of love. You have to distinguish all these types of in and there’s not a single English word

01:38:35 which captures most of these things. Yeah. And it seems like language when used for communication

01:38:42 between humans almost as a feature has some ambiguity built in. It’s not an accident because

01:38:49 like the human condition is a giant mess. And so it feels like nobody wants two robots having a very

01:38:57 precise formal logic conversation on a first date. There’s some dance of uncertainty of wit,

01:39:05 of humor, of push and pull and all that kind of stuff. If everything is made precise, then life

01:39:10 is not worth living I think in terms of the human experience. And we’ve all had this experience of

01:39:16 creatively misunderstanding. One of my favorite stories involving Marvin Minsky is when I asked

01:39:30 him about how he was able to turn out so many fantastic PhDs, so many fantastic people who

01:39:40 did great PhD theses. How did he think of all these great ideas? What he said is he would

01:39:47 generally say something that didn’t exactly make sense. He didn’t really know what it meant. But

01:39:53 the student would figure like, oh my God, Minsky said this, it must be a great idea. And he

01:39:59 or she would work and work and work until they found some meaning in this sort of Chauncey

01:40:05 Gardiner-like utterance that Minsky had made. And then some great thesis would come out of it.

01:40:11 Yeah. I love this so much because young people come up to me and I’m distinctly made

01:40:17 aware that the words I say have a long lasting impact. I will now start doing the Minsky method

01:40:24 of saying something cryptically profound and then letting them actually make something useful

01:40:32 and great out of that. You have to become revered enough that people will take as a default that

01:40:40 everything you say is profound. Yes, exactly. Exactly. I love Marvin Minsky so much. I’ve

01:40:48 heard this interview with him where he said that the key to his success has been to hate everything

01:40:53 he’s ever done like in the past. He has so many good one liners or also to work on things that

01:41:04 nobody else is working on because he’s not very good at doing stuff. Oh, I think that was just

01:41:09 false. Well, but see, I took whatever he said and I ran with it and I thought it was profound

01:41:14 because it’s Marvin Minsky. But a lot of behavior is in the eye of the beholder and a lot of the

01:41:20 meaning is in the eye of the beholder. One of Minsky’s early programs was a begging program.

01:41:25 Are you familiar with this? So this is back in the day when you had job control cards at the

01:41:32 beginning of your IBM card deck that said things like how many CPU seconds to allow this to run

01:41:38 before it got kicked off because computer time was enormously expensive. And so he wrote a program

01:41:45 and its job control card said, give me 30 seconds of CPU time. And all it did was wait like 20

01:41:53 seconds and then it would print out on the operator’s console teletype, I need another 20

01:41:59 seconds. So the operator would give it another 20 seconds, it would wait, it says, I’m almost done,

01:42:04 I need a little bit more time. So at the end he’d get this printout and he’d be charged for like 10

01:42:10 times as much computer time as his job control card specified. And he’d say, look, I put

01:42:15 30 seconds here, you’re charging me for five minutes, I’m not going to pay for this. And

01:42:20 the poor operator would say, well, the program kept asking for more time and Marvin would say,

01:42:26 oh, it always does that. I love that. If you could just linger on it for a little bit,

01:42:32 is there something you’ve learned from your interaction with Marvin Minsky about artificial

01:42:38 intelligence, about life? But I mean, he’s, again, like your work, his work is, you know,

01:42:47 he’s a seminal figure in this very short history of artificial intelligence research and development.

01:42:54 What have you learned from him as a human being, as an AI intellect?

01:43:00 I would say both he and Ed Feigenbaum impressed on me the realization that our lives are finite,

01:43:10 our research lives are finite. We’re going to have limited opportunities to do AI research

01:43:16 projects. So you should make each one count. Don’t be afraid of doing a project that’s going

01:43:22 to take years or even decades. And don’t settle for bump on a log projects that could lead to

01:43:34 some published journal article that five people will read and pat you on the head for and so on.

01:43:43 So one bump on a log after another is not how you get from the earth to the moon by slowly putting

01:43:51 additional bumps on this log. The only way to get there is to think about the hard problems and think

01:43:58 about novel solutions to them. And if you do that, and if you’re willing to listen to nature,

01:44:08 to empirical reality, willing to be wrong, it’s perfectly fine because if occasionally you’re

01:44:14 right, then you’ve gotten part of the way to the moon.

01:44:17 You know, you’ve worked on Psych for 37 years. Have you ever considered quitting?

01:44:27 I mean, has it been too much? So I’m sure there’s an optimism in the early days that this is going

01:44:33 to be way easier. And let me ask you another way too, because I’ve talked to a few people on this

01:44:38 podcast, AI folks, that bring up Psych as an example of a project that has a beautiful vision and is a

01:44:47 beautiful dream, but it never really materialized. That’s how it’s spoken about. I suppose you could

01:44:56 say the same thing about neural networks and all ideas until they are. So why do you think people

01:45:05 say that, first of all? And second of all, did you feel that ever throughout your journey? And did

01:45:11 you ever consider quitting on this mission?

01:45:13 We keep a very low profile. We don’t attend very many conferences. We don’t give talks. We don’t

01:45:21 write papers. We don’t play the academic game at all. And as a result, people often only know about

01:45:31 us because of a paper we wrote 10 or 20 or 30 or 37 years ago. They only know about us because of

01:45:40 what someone else secondhand or thirdhand said about us.

01:45:45 So thank you for doing this podcast, by the way. It shines a little bit of light on some of the

01:45:51 fascinating stuff you’re doing.

01:45:52 Well, I think it’s time for us to keep a higher profile now that we’re far enough along that

01:45:59 other people can begin to help us with the final N%, where N is maybe 90%. But now that we’ve

01:46:09 gotten this knowledge pump primed, it’s going to become very important for everyone to help if they

01:46:18 are willing to, if they’re interested in it. Retirees who have enormous amounts of time and

01:46:23 would like to leave some kind of legacy to the world, people who, because of the pandemic or for

01:46:31 one reason or another, have more time at home to be online and contribute. If we can raise

01:46:39 awareness of how far our project has come and how close to being primed the knowledge pump is,

01:46:47 then we can begin to harness this untapped amount of humanity. I’m not really that concerned about

01:46:56 professional colleagues’ opinions of our project. I’m interested in getting as many people in the

01:47:03 world as possible actively helping and contributing to get us from where we are to really covering all

01:47:10 of human knowledge and different human opinion including contrasting opinion that’s worth

01:47:16 representing. So I think that’s one reason. A, I don’t think there was ever a time where I thought

01:47:24 about quitting. There are times where I’ve become depressed a little bit about how hard it is to get

01:47:32 funding for the system. Occasionally there are AI winters and things like that. Occasionally there

01:47:39 are AI what you might call summers where people have said, why in the world didn’t you sell your

01:47:47 company to company X for some large amount of money when you had the opportunity and so on.

01:47:55 Company X here being old companies maybe you’ve never even heard of, like Lycos or something like

01:48:01 that. So the answer is that one reason we’ve stayed a private company, we haven’t gone public,

01:48:09 one reason that we haven’t gone out of our way to take investment dollars is because we want to

01:48:16 have control over our future, over our state of being so that we can continue to do this until

01:48:24 it’s done and we’re making progress and we’re now so close to done that almost all of our work is

01:48:32 commercial applications of our technology. So five years ago almost all of our money came from the

01:48:39 government. Now virtually none of it comes from the government. Almost all of it is from companies

01:48:44 that are actually using it for something, hospital chains using it for medical reasoning about

01:48:49 patients and energy companies using it and various other manufacturers using it to reason about

01:48:57 supply chains and things like that. So there’s so many questions I want to ask. So one of the ways

01:49:04 that people can help is by adding to the knowledge base and that’s really basically anybody if the

01:49:09 tooling is right. And the other way, I kind of want to ask you about your thoughts on this. So

01:49:15 you’ve had, like you said, government clients and big clients, a lot of clients, but most

01:49:22 of it is shrouded in secrecy because of the nature of the relationship of the kind of things you’re

01:49:27 helping them with. So that’s one way to operate and another way to operate is more in the open

01:49:34 where it’s more consumer facing. And so, you know, hence something like OpenCyc is born at some

01:49:42 point, or there’s… No, that’s a misconception. Oh, well, let’s go there. So what is OpenCyc

01:49:49 and how was it born? Two things I want to say, and I want to say each of them before the other,

01:49:53 so it’s going to be difficult. But we’ll come back to OpenCyc in a minute. But one of the terms of

01:50:01 our contracts with all of our customers and partners is knowledge you have that is genuinely

01:50:09 proprietary to you. We will respect that, we’ll make sure that it’s marked as proprietary to you

01:50:15 in the Cyc knowledge base. No one other than you will be able to see it if you don’t want them to

01:50:20 and it won’t be used in inferences other than for you and so on. However, any knowledge which is

01:50:28 necessary in building any applications for you and with you which is publicly available general

01:50:36 human knowledge is not going to be proprietary. It’s going to just become part of the normal Cyc

01:50:42 knowledge base and it will be openly available to everyone who has access to Cyc. So that’s

01:50:48 an important constraint that we never went back on even when we got pushback from companies which

01:50:54 we often did who wanted to claim that almost everything they were telling us was proprietary.

01:50:59 So there’s a line between very domain specific company specific stuff and the general knowledge

01:51:09 that comes from that. Yes or if you imagine say it’s an oil company there are things which they

01:51:15 would expect any new petroleum engineer they hired to already know and it’s not okay for them to

01:51:24 consider that that is proprietary. And sometimes a company will say, well, we’re the

01:51:28 first ones to pay you to represent that in Cyc, and our attitude is some polite form of tough. The

01:51:37 deal is this, take it or leave it. And in a few cases they’ve left it, and in most cases they’ll

01:51:44 see our point of view and take it, because that’s how we’ve built the Cyc system, by essentially

01:51:51 tacking with the funding winds, where people would fund a project and half of it would be general

01:51:59 knowledge that would stay permanently as part of Cyc. So always with these partnerships it’s not

01:52:04 like a distraction from the main psych development. It’s a small distraction. It’s a small but it’s

01:52:10 not a complete one so you’re adding to the knowledge base. Yes absolutely and we try to

01:52:14 stay away from projects that would not have that property. So let me go back and talk about OpenCyc

01:52:23 for a second. So I’ve had a lot of trouble expressing, and convincing other AI researchers of, how

01:52:34 important it is to use an expressive representation language like we do this higher order logic rather

01:52:41 than just using some triple store knowledge graph type representation. And so as an attempt to show

01:52:52 them why they needed something more we said oh well we’ll represent this unimportant projection

01:53:02 or shadow or subset of Cyc that just happens to be the simple binary relations, the relation

01:53:11 argument-one argument-two triples and so on. And then you’ll see how much more useful it would be if you

01:53:20 had the entire Cyc system. So it’s all well and good to have the taxonomic relations between terms

01:53:29 like person and night and sleep and bed and house and eyes and so on. But think about how much more

01:53:39 useful it would be if you also had all the rules of thumb about those things like people sleep at

01:53:46 night, they sleep lying down, they sleep with their eyes closed, they usually sleep in beds in

01:53:50 our country, they sleep for hours at a time, they can be woken up, they don’t like being woken up

01:53:55 and so on and so on. So it’s that massive amount of knowledge which is not part of OpenCyc and

01:54:02 we thought that all the researchers would then immediately say oh my god of course we need the

01:54:08 other 90% that you’re not giving us, let’s partner and license Cyc so that we can use it in our

01:54:15 research. But instead what people said is oh even the bit you’ve released is so much better than

01:54:20 anything we had, we’ll just make do with this. And so if you look there are a lot of robotics

01:54:25 companies today, for example, which use OpenCyc as their fundamental ontology. And in some sense

01:54:33 the whole world missed the point of OpenCyc. We were doing it to show people why that’s

01:54:40 not really what they wanted, and too many people thought somehow that this was Cyc, or that this

01:54:45 was in fact good enough for them, and they never even bothered coming to us to get access to the

01:54:52 full Cyc. But there’s two parts to OpenCyc. So one is convincing people of the idea and the

01:54:57 power of this general kind of representation of knowledge and the value that you hold in having

01:55:02 acquired that knowledge and built it and continue to build it. And the other is the code base. This

01:55:07 is the code side of it. So my sense of the code base that Cyc, or Cycorp, is operating with, I mean

01:55:16 it has the technical debt of three-plus decades, right? This is the exact same problem that

01:55:23 Google had to deal with in the early version of TensorFlow. It’s still dealing with that. They had

01:55:29 to basically break compatibility with the past several times and that’s only over a period of

01:55:36 a couple years. But they I think successfully opened up, it’s very risky, very gutsy move to

01:55:43 open up TensorFlow and then PyTorch on the Facebook side. And what you see is there’s a

01:55:51 magic place where you can find a community, where you could develop a community that builds on the

01:55:57 system without taking away any of, not any, but most of the value. So most of the value that

01:56:05 Google has is still at Google. Most of the value that Facebook has is still at Facebook, even though some

01:56:10 of this major machine learning tooling is released into the open. My question is not so much on the

01:56:16 knowledge, which is also a big part of OpenCyc, but all the different kinds of tooling. So there’s

01:56:24 the kind of, all the kinds of stuff you can do on the knowledge graph, knowledge base, whatever we

01:56:29 call it. There’s the inference engines. So there could be some, there probably are a bunch of

01:56:35 proprietary stuff you want to kind of keep secret. And there’s probably some stuff you can open up

01:56:40 completely and then let the community build up enough community where they develop stuff on top

01:56:45 of it. Yes, there will be those publications and academic work and all that kind of stuff. And also

01:56:51 the tooling of adding to the knowledge base, right? Like developing, you know, there’s incredible

01:56:56 amount, like there’s so many people that are just really good at this kind of stuff in the open

01:57:00 source community. So my question for you is like, have you struggled with this kind of idea that

01:57:06 you have so much value in your company already? You’ve developed so many good things. You have

01:57:11 clients that really value your relationships. And then there’s this dormant giant open source

01:57:17 community that as far as I know, you’re not utilizing. There’s so many things to say there,

01:57:24 but there could be magic moments where the community builds up large enough to where the

01:57:32 artificial intelligence field, which is currently 99.9% dominated by machine

01:57:39 learning, has a phase shift towards, or at least in part towards, more of what you might

01:57:45 call symbolic AI, this whole space where Cyc is at the center. And then, as you know,

01:57:53 that requires a little bit leap of faith because you’re now surfing and there’ll be obviously

01:57:59 competitors that will pop up and start making you nervous and all that kind of stuff. So do you think

01:58:04 about the space of open sourcing some parts and not others, how to leverage the community,

01:58:10 all those kinds of things? That’s a good question. And I think you phrased it the right way,

01:58:14 which is we’re constantly struggling with the question of what to open source, what to make

01:58:23 public, what to even publicly talk about. And there are enormous pluses and minuses to every

01:58:34 alternative. And it’s very much like negotiating a very treacherous path. Partly the analogy is

01:58:44 like if you slip, you could make a fatal mistake, give away something which essentially kills you

01:58:51 or fail to give away something where the failure to give it away hurts you, and so on. So it is a very

01:58:59 tough, tough question. Usually what we have done with people who’ve approached us to collaborate

01:59:10 on research is to say we will make available to you the entire knowledge base and executable

01:59:19 copies of all of the code, but only very, very limited source code access if you have some idea

01:59:29 for how you might improve something or work with us on something. So let me also get back to one

01:59:36 of the very, very first things we talked about here, which was separating the question of how

01:59:45 could you get a computer to do this at all versus how could you get a computer to do this efficiently

01:59:50 enough in real time. And so one of the early lessons we learned was that we had to separate

01:59:59 the epistemological problem of what should the system know, separate that from the heuristic

02:00:05 problem of how can the system reason efficiently with what it knows. And so instead of trying to

02:00:12 pick one representation language which was the sweet spot or the best tradeoff point between

02:00:20 expressiveness of the language and efficiency of the language, if you had to pick one,

02:00:25 knowledge graphs, associative triples, would probably be about the best you

02:00:31 could do. And that’s why we started there. But after a few years, we realized that what we could

02:00:37 do is we could split this and we could have one nice, clean, epistemological level language,

02:00:44 which is this higher order logic, and we could have one or more grubby but efficient heuristic

02:00:52 level modules that opportunistically would say, oh, I can make progress on what you’re trying to

02:01:00 do over here. I have a special method that will contribute a little bit toward a solution.

02:01:05 Of course, some subset of that knowledge.

02:01:09 Exactly. So by now, we have over a thousand of these heuristic level modules, and they function

02:01:14 as a kind of community of agents. And there’s one of them, which is a general theorem prover. And in

02:01:20 theory, that’s the only one you need. But in practice, it always takes so long that you never

02:01:29 want to call on it. You always want these other agents to very efficiently reason through it. It’s

02:01:35 sort of like if you’re balancing a chemical equation. You could go back to first principles,

02:01:39 but in fact, there are algorithms which are vastly more efficient. Or if you’re trying to

02:01:44 solve a quadratic equation, you could go back to first principles of mathematics. But it’s much

02:01:52 better to simply recognize that this is a quadratic equation and apply the quadratic formula and snap,

02:01:58 you get your answer right away and so on. So think of these as like a thousand little experts

02:02:04 that are all looking at everything that Cyc gets asked and looking at everything that every other

02:02:11 little agent has contributed, almost like notes on a blackboard, notes on a whiteboard, and making

02:02:19 additional notes when they think they can be helpful. And gradually, that community of agents

02:02:24 gets an answer to your question, gets a solution to your problem. And if we ever come up in a domain

02:02:31 application where Cyc is getting the right answer but taking too long, then what we’ll often

02:02:38 do is talk to one of the human experts and say, here’s the set of reasoning steps that Cyc went

02:02:45 through. You can see why it took it a long time to get the answer. How is it that you were able

02:02:50 to answer that question in two seconds? And occasionally, you’ll get an expert who just

02:02:57 says, well, I just know it. I just was able to do it or something. And then you don’t talk to them

02:03:02 anymore. But sometimes you’ll get an expert who says, well, let me introspect on that. Yes,

02:03:07 here is a special representation we use just for aqueous chemistry equations, or here’s a special

02:03:15 representation and a special technique, which we can now apply to things in this special

02:03:21 representation and so on. And then you add that as the 1,001st heuristic level (HL) module. And from

02:03:29 then on, in any application, if it ever comes up again, it’ll be able to contribute and so on. So

02:03:35 that’s pretty much one of the main ways in which Cyc has recouped this lost efficiency. A second

02:03:43 important way is meta-reasoning. So you can speed things up by focusing on removing knowledge from

02:03:52 the system till all it has left is the minimal knowledge needed. But that’s the wrong thing to

02:03:58 do, right? That would be like in a human extirpating part of their brain or something. That’s really

02:04:02 bad. So instead, what you want to do is give it meta-level advice, tactical and strategic advice,

02:04:08 that enables it to reason about what kind of knowledge is going to be relevant to this problem,

02:04:15 what kind of tactics are going to be good to take in trying to attack this problem. When is it time

02:04:21 to start trying to prove the negation of this thing, because I’m knocking myself out trying to

02:04:26 prove it’s true, and maybe it’s false. And if I just spend a minute, I can see that it’s false

02:04:30 or something. So it’s like dynamically pruning the graph based on the particular

02:04:37 thing you’re trying to infer. Yes. And so by now, we have about 150 of these sort of like

02:04:45 breakthrough ideas that have led to dramatic speed ups in the inference process, where one

02:04:53 of them was this EL/HL split and lots of HL modules. Another one was using meta-

02:04:59 level reasoning to reason about the reasoning that’s going on and so on. And 150 breakthroughs

02:05:08 may sound like a lot, but if you divide by 37 years, it’s not as impressive.
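The EL/HL split and the blackboard-style community of agents described above can be sketched in miniature. This is an illustrative toy, not Cyc’s actual architecture; every name and data structure in it is invented for the example:

```python
# Toy version of the EL/HL split: fast, specialized heuristic-level (HL)
# modules get first crack at a query; the slow general prover is a last resort.

def transitivity_module(query, facts):
    """HL specialist: decides largerThan queries by walking stored pairs."""
    rel, a, b = query
    if rel != "largerThan":
        return None                      # not my specialty; defer
    seen, frontier = {a}, [a]
    while frontier:
        x = frontier.pop()
        for (r, p, q) in facts:
            if r == "largerThan" and p == x:
                if q == b:
                    return True          # found a chain from a down to b
                if q not in seen:
                    seen.add(q)
                    frontier.append(q)
    return None                          # cannot decide; defer

def general_prover(query, facts):
    """Stand-in for the expensive general theorem prover (ground facts only)."""
    return query in facts

def answer(query, facts, hl_modules):
    for module in hl_modules:            # community of specialists first
        result = module(query, facts)
        if result is not None:
            return result
    return general_prover(query, facts)  # fall back to first principles

facts = [("largerThan", "elephant", "dog"), ("largerThan", "dog", "mouse")]
print(answer(("largerThan", "elephant", "mouse"), facts, [transitivity_module]))
```

Run, this prints `True`: the specialist settles the query in a few steps, and the general prover is never consulted.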

02:05:12 So there’s these kind of heuristic modules that really help improve the inference. How hard,

02:05:21 in general, is this? Because you mentioned higher order logic. In the general theorem prover sense,

02:05:30 it’s intractable, very difficult problem. Yes. So how hard is this inference problem when we’re not

02:05:37 talking about if we let go of the perfect and focus on the good? I would say it’s half of the

02:05:46 problem in the following empirical sense, which is over the years, about half of our effort,

02:05:54 maybe 40% of our effort, has been our team of inference programmers. And the other 50 to

02:06:02 60% has been our ontologists, or ontological engineers, putting in knowledge. So our ontological

02:06:08 engineers in most cases don’t even know how to program. They have degrees in things like

02:06:12 philosophy and so on. So it’s almost like the… I love that. I love to hang out with

02:06:17 those people actually. Oh yes, it’s wonderful. But it’s very much like the Eloi and the Morlocks

02:06:22 in H.G. Wells’s The Time Machine. So you have the Eloi, who only program in the epistemological higher

02:06:29 order logic language. And then you have the Morlocks who are under the ground figuring

02:06:36 out what the machinery is that will make this efficiently operate and so on. And so, you know,

02:06:43 occasionally they’ll toss messages back to each other and so on. But it really is almost this

02:06:49 50-50 split between finding clever ways to recoup efficiency when you have an expressive language

02:06:57 and putting in the content of what the system needs to know. And yeah, both are fascinating.

02:07:03 To some degree, the entirety of the system, as far as I understand, is written in various variants

02:07:10 of Lisp. So my favorite programming language is still Lisp. I don’t program in it much anymore because,

02:07:17 you know, the world, in the majority of its systems, has moved on. Like, everybody respects Lisp,

02:07:24 but many of the systems are not written in Lisp anymore. But Cyc, as far as I understand,

02:07:30 maybe you can correct me, there’s a bunch of Lisp in it. Yeah. So it’s based on Lisp code that we

02:07:37 produced. Most of the programming is still going on in a dialect of Lisp. And then for efficiency

02:07:44 reasons, that gets automatically translated into things like Java or C. Nowadays, it’s almost all

02:07:51 translated into Java because Java has gotten good enough that that’s really all we need to do.

02:07:58 So it’s translated into Java, and then Java is compiled down to bytecode.

02:08:02 Yes.

02:08:03 Okay, so that’s, you know, a process that probably has to

02:08:11 do with the fact that when Cyc was originally written, and you build up a powerful system,

02:08:16 like there is some technical debt you have to deal with, as is the case with most powerful

02:08:22 systems that span years. Have you ever considered this, and this would help me understand, because from my

02:08:31 perspective, so much of the value of everything you’ve done with Cyc and Cycorp is

02:08:38 the knowledge. Have you ever considered just throwing away the code base and starting

02:08:44 from scratch, not really throwing away, but sort of shedding that technical

02:08:53 debt, starting with a more updated programming language? Is that throwing away a lot of value

02:08:59 or no? Like, what’s your sense? How much of the value is in the silly software engineering aspect,

02:09:05 and how much of the value is in the knowledge?

02:09:07 So development of programs in Lisp proceeds, I think, somewhere between a thousand and fifty

02:09:21 thousand times faster than development in any of what you’re calling modern or improved computer

02:09:29 languages.

02:09:30 Well, there’s other functional languages like, you know, Clojure and all that. But I mean,

02:09:34 I’m with you. I like Lisp. I just wonder how many great programmers there are. There’s still like…

02:09:40 Yes. So it is true when a new inference programmer comes on board, they need to learn some of Lisp.

02:09:48 And in fact, we have a subset of Lisp, which we cleverly call SubL, which is really all they

02:09:55 need to learn. And so the programming actually goes on in SubL, not in full Lisp. And so it

02:10:01 does not take programmers very long at all to learn SubL. And that’s something which can then

02:10:08 be translated efficiently into Java. And for some of our programmers who are doing, say,

02:10:14 user interface work, then they never have to even learn SubL. They just have to learn APIs into the

02:10:21 basic Cyc engine.

02:10:23 So you’re not necessarily feeling the burden of like, it’s extremely efficient. That’s not a

02:10:29 problem to solve. Okay.

02:10:31 Right. The other thing is, remember that we’re talking about hiring programmers to do inference,

02:10:37 who are programmers interested in effectively automatic theorem proving. And so those are

02:10:43 people already predisposed to representing things in logic and so on. And Lisp really was the

02:10:50 programming language based on logic that John McCarthy and others who developed it basically

02:10:58 took the formalisms that Alonzo Church and other philosophers, other logicians, had come up with

02:11:06 and basically said, can we basically make a programming language which is effectively logic?

02:11:12 And so since we’re talking about reasoning about expressions written in this epistemological

02:11:22 language and we’re doing operations which are effectively like theorem proving type

02:11:27 operations and so on, there’s a natural impedance match between Lisp and the knowledge, the

02:11:34 way it’s represented.

02:11:36 So I guess you could say it’s a perfectly logical language to use.

02:11:40 Oh, yes.

02:11:41 Okay, I’m sorry.

02:11:42 I’ll even let you get away with that.

02:11:46 Okay, thank you. I appreciate it.

02:11:47 So I’ll probably use that in the future without credit.

02:11:53 But no, I think the point is that the language you program in isn’t really that important.

02:12:01 It’s more that you have to be able to think in terms of, for instance, creating new helpful

02:12:07 HL modules and how they’ll work with each other and looking at things that are taking

02:12:13 a long time and coming up with new specialized data structures that will make this efficient.

02:12:20 So let me just give you one very simple example, which is when you have a transitive relation

02:12:26 like larger than, this is larger than that, which is larger than that, which is larger

02:12:29 than that.

02:12:30 So the first thing must be larger than the last thing.

02:12:33 Whenever you have a transitive relation, if you’re not careful, if I ask whether this

02:12:38 thing over here is larger than the thing over here, I’ll have to do some kind of graph walk

02:12:43 or theorem proving that might involve like five or 10 or 20 or 30 steps.

02:12:48 But if you store, redundantly store the transitive closure, the Kleene star of that transitive

02:12:55 relation, now you have this big table.

02:12:58 But you can always guarantee that in one single step, you can just look up whether this is

02:13:04 larger than that.

02:13:06 And so there are lots of cases where storage is cheap today.

02:13:12 And so by having this extra redundant data structure, we can answer this commonly occurring

02:13:18 type of question very, very efficiently.
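That trade of storage for speed can be shown in a few lines. A hedged sketch, with hypothetical names and nothing like Cyc’s internal representation: compute the closure once, then every later "is A larger than B?" query is a single set lookup rather than a graph walk:

```python
# Redundantly store the transitive closure of a transitive relation so that
# "is a larger than b?" becomes one O(1) lookup instead of a multi-step walk.

def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:                        # keep adding implied pairs until fixpoint
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

larger_than = {("sun", "earth"), ("earth", "moon"), ("moon", "boulder")}
closure = transitive_closure(larger_than)

print(("sun", "boulder") in closure)   # True, in one lookup
print(("moon", "earth") in closure)    # False
```

The naive fixpoint loop here is quadratic per pass; a production system would use something like Floyd–Warshall or incremental maintenance, but the space-for-time trade is the same.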

02:13:22 Let me give you one other analog of that, which is something we call rule macro

02:13:28 predicates, which is we’ll see this complicated rule and we’ll notice that things very much

02:13:36 like it syntactically come up again and again and again.

02:13:41 So we’ll create a whole brand new relation or predicate or function that captures that

02:13:47 and takes maybe not two arguments, takes maybe three, four or five arguments and so on.

02:13:54 And now we have effectively converted some complicated if then rule that might have to

02:14:04 have inference done on it into some ground atomic formula, which is just the name of

02:14:10 a relation and a few arguments and so on.

02:14:13 And so converting commonly occurring types or schemas of rules into brand new predicates,

02:14:20 brand new functions, turns out to enormously speed up the inference process.
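A rule macro predicate can be illustrated with a toy. The predicate name `typeHasAttribute` and all the data here are invented for the example: a recurring if-then schema is compressed into one named relation, so each instance becomes a ground fact instead of a rule requiring inference:

```python
# The recurring rule schema "if ?x is an instance of ?type, then ?x has
# ?attr = ?val" is captured once as a macro predicate, typeHasAttribute.
# Each instance is then a ground atomic formula, cheap to store and match.
macro_facts = {
    ("typeHasAttribute", "Horse", "headCount", 1),
    ("typeHasAttribute", "Barn", "roofCount", 1),
}

isa_facts = {("isa", "Secretariat", "Horse"), ("isa", "oldRedBarn", "Barn")}

def lookup(entity, attr):
    """Expand the macro on demand: find the entity's type, then the fact."""
    for (_, x, typ) in isa_facts:
        if x == entity:
            for (_, t, a, v) in macro_facts:
                if t == typ and a == attr:
                    return v
    return None

print(lookup("Secretariat", "headCount"))  # 1
print(lookup("oldRedBarn", "roofCount"))   # 1
```

Matching a four-argument ground tuple replaces running inference over a quantified if-then rule each time the pattern comes up.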

02:14:27 So now we’ve covered about four of the 150 good ideas I mentioned.

02:14:32 So that idea in particular is like a nice compression that turns out to be really useful.

02:14:37 That’s really interesting.

02:14:38 I mean, this whole thing is just fascinating from a philosophical.

02:14:40 There’s part of me, I mean, it makes me a little bit sad because your work is both from

02:14:48 a computer science perspective fascinating and the inference engine from an epistemological

02:14:53 philosophical aspect fascinating, but you know, it is also you’re running a company

02:14:59 and there’s some stuff that has to remain private and it’s sad.

02:15:03 Well here’s something that may make you feel better, a little bit better.

02:15:09 We’ve formed a not-for-profit company called the Knowledge Axe Immunization Institute,

02:15:15 KNAX.

02:15:17 And I have this firm belief with a lot of empirical evidence to support it that the

02:15:25 education that people get in high schools, in colleges, in graduate schools and so on

02:15:31 is almost completely orthogonal to, almost completely irrelevant to how good they’re

02:15:38 going to be at coming up to speed in doing this kind of ontological engineering and writing

02:15:49 these assertions and rules and so on in Cyc.

02:15:49 And so very often we’ll interview candidates who have their PhD in philosophy, who’ve

02:15:54 taught logic for years and so on, and they’re just awful.

02:15:59 But the converse is true.

02:16:00 So one of the best ontological engineers we ever had never graduated high school.

02:16:06 And so the purpose of Knowledge Axe Immunization Institute, if we can get some foundations

02:16:13 to help support it, is to identify people in the general population, maybe high school dropouts,

02:16:20 who have latent talent for this sort of thing, offer them effectively scholarships to train

02:16:28 them and then help place them in companies that need more trained ontological engineers,

02:16:35 some of which would be working for us, but mostly would be working for partners or customers

02:16:39 or something.

02:16:40 And if we could do that, that would create an enormous number of relatively very high

02:16:46 paying jobs for people who currently have no way out of some situation that they’re

02:16:53 locked into.

02:16:55 So is there something you can put into words that describes somebody who would be great

02:17:01 at ontological engineering?

02:17:03 So what characteristics about a person make them great at this task, this task of converting

02:17:12 the messiness of human language and knowledge into formal logic?

02:17:17 This is very much like what Alan Turing had to do during World War II in trying to find

02:17:22 people to bring to Bletchley Park, where he would publish in the London Times cryptic

02:17:28 crossword puzzles along with some innocuous looking note, which essentially said, if you

02:17:34 were able to solve this puzzle in less than 15 minutes, please call this phone number

02:17:40 and so on.

02:17:42 Or back when I was young, there was the practice of having matchbooks, where on the inside

02:17:49 of the matchbook, there would be a “Can you draw this?”

02:17:54 You could have a career in commercial art if you can copy this drawing, and so on.

02:18:00 So yes, the analog of that.

02:18:02 Is there a little test to get to the core of whether you’re going to be good or not?

02:18:06 So part of it has to do with being able to make and appreciate and react appropriately

02:18:14 to puns and other jokes.

02:18:16 So you have to have a kind of sense of humor.

02:18:18 And if you’re good at telling jokes and good at understanding jokes, that’s one

02:18:24 indicator.

02:18:25 Like puns?

02:18:26 Yes.

02:18:27 Like dad jokes?

02:18:28 Yes.

02:18:29 Well, maybe not dad jokes, but funny jokes.

02:18:32 I think I’m applying to work at Cycorp.

02:18:36 Another is if you’re able to introspect.

02:18:38 So very often, we’ll give someone a simple question and we’ll say like, why is this?

02:18:48 And sometimes they’ll just say, because it is, okay, that’s a bad sign.

02:18:53 But very often, they’ll be able to introspect and so on.

02:18:56 So one of the questions I often ask is I’ll point to a sentence with a pronoun in it and

02:19:01 I’ll say, the referent of that pronoun is obviously this noun over here.

02:19:07 How would you or I or an AI or a five year old, 10 year old child know that that pronoun

02:19:14 refers to that noun over here?

02:19:18 And often the people who are going to be good at ontological engineering will give me some

02:19:25 causal explanation or will refer to some things that are true in the world.

02:19:30 So if you imagine a sentence like, the horse was led into the barn while its head was still

02:19:35 wet.

02:19:36 And so its head refers to the horse’s head.

02:19:38 But how do you know that?

02:19:40 And so some people will say, I just know it.

02:19:42 Some people will say, well, the horse was the subject of the sentence.

02:19:45 And I’ll say, okay, well, what about the horse was led into the barn while its roof was still

02:19:50 wet?

02:19:51 Now, its roof obviously refers to the barn.

02:19:54 And so then they’ll say, oh, well, that’s because it’s the closest noun.

02:19:58 And so basically, if they try to give me answers which are based on syntax and grammar and

02:20:05 so on, that’s a really bad sign.

02:20:07 But if they’re able to say things like, well, horses have heads and barns don’t and barns

02:20:12 have roofs and horses don’t, then that’s a positive sign that they’re going to be good

02:20:16 at this because they can introspect on what’s true in the world that leads you to know certain

02:20:22 things.
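The world-knowledge test described here can be sketched in a few lines of code. This is a toy illustration, not Cyc’s actual representation; the part-whole facts and the function name are invented for the example:

```python
# Toy illustration of resolving a possessive pronoun by consulting
# facts about the world rather than syntax or word order.

# Hypothetical mini knowledge base: which kinds of things have which parts.
HAS_PART = {
    "horse": {"head", "tail", "leg"},
    "barn": {"roof", "door", "wall"},
}

def resolve_possessive(candidates, part):
    """Return the candidate nouns that could plausibly possess `part`."""
    return [noun for noun in candidates if part in HAS_PART.get(noun, set())]

# "The horse was led into the barn while its head was still wet."
print(resolve_possessive(["horse", "barn"], "head"))  # ['horse']
# "... while its roof was still wet."
print(resolve_possessive(["horse", "barn"], "roof"))  # ['barn']
```

The point of the interview question survives in the sketch: the resolver works because horses have heads and barns have roofs, not because of which noun is the subject or which is closest.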

02:20:23 How fascinating is it that getting a Ph.D. makes you less capable of introspecting deeply

02:20:28 about this?

02:20:29 Oh, I wouldn’t go that far.

02:20:30 I’m not saying that it makes you less capable.

02:20:32 Let’s just say it’s independent of how good people are.

02:20:37 You’re not saying that.

02:20:38 I’m saying that.

02:20:39 It’s interesting that for a lot of people, Ph.D.s, sorry, philosophy aside, sometimes

02:20:47 education narrows your thinking versus expands it.

02:20:52 It’s kind of fascinating.

02:20:53 And for certain when you’re trying to do ontological engineering, which is essentially teach our

02:20:58 future AI overlords how to reason deeply about this world and how to understand it, that

02:21:05 requires that you think deeply about the world.

02:21:08 So I’ll tell you a sad story about MathCraft, which is, why is it not widely used in schools

02:21:14 today?

02:21:16 We’re not really trying to make big profit on it or anything like that.

02:21:20 When we’ve gone to schools, their attitude has been, well, if a student spends 20 hours

02:21:27 going through this MathCraft program from start to end and so on, will it improve their

02:21:34 score on this standardized test more than if they spent 20 hours just doing mindless

02:21:39 drills of problem after problem after problem?

02:21:43 And the answer is, well, no, but it’ll increase their understanding more.

02:21:47 And their attitude is, well, if it doesn’t increase their score on this test, then we’re

02:21:54 not going to adopt it.

02:21:55 That’s sad.

02:21:56 I mean, that’s a whole another three, four hour conversation about the education system.

02:22:01 But let me ask you, let me go super philosophical, as if we weren’t already.

02:22:06 So in 1950, Alan Turing wrote the paper that formulated the Turing test.

02:22:11 Yes.

02:22:12 And he opened the paper with the question, can machines think?

02:22:16 So what do you think?

02:22:17 Can machines think?

02:22:18 Let me ask you this question.

02:22:20 Absolutely.

02:22:21 Machines can think, certainly as well as humans can think, right?

02:23:27 We’re meat machines; the fact that they’re not currently made out of meat is just an engineering

02:23:34 decision and so on.

02:22:38 So of course machines can think.

02:22:42 I think that there was a lot of damage done by people misunderstanding Turing’s imitation

02:22:51 game and focusing on trying to get a chatbot to fool other people into thinking it was

02:23:03 human and so on.

02:23:06 That’s not a terrible test in and of itself, but it shouldn’t be your one and only test

02:23:10 for intelligence.

02:23:13 In terms of tests of intelligence, you know, there’s the Loebner Prize, which is, you might

02:23:19 say, a stricter formulation of the Turing test as originally formulated.

02:23:25 And then there’s something like Alexa Prize, which is more, I would say a more interesting

02:23:31 formulation of the test, which is like, ultimately the metric is how long does a human want to

02:23:37 talk to the AI system?

02:23:38 So it’s like if the goal is you want it to be 20 minutes, it’s basically not just having

02:23:46 a convincing conversation, but more like a compelling one or a fun one or an interesting

02:23:52 one.

02:23:53 And that seems like more to the spirit maybe of what Turing was imagining.

02:24:01 But what for you do you think in the space of tests is a good test?

02:24:06 When you see a system based on Cyc that passes that test, you’d be like, damn, we’ve

02:24:12 created something special here.

02:24:17 The test has to be something involving depth of reasoning and recursiveness of reasoning,

02:24:23 the ability to answer repeated why questions about the answer you just gave.

02:24:30 How many why questions in a row can you keep answering?

02:24:33 Something like that.

02:24:36 Just have like a young curious child and an AI system and how long will an AI system last

02:24:41 before it wants to quit?

02:24:43 Yes.

02:24:44 And again, that’s not the only test.

02:24:45 Another one has to do with argumentation.

02:24:48 In other words, here’s a proposition, come up with pro and con arguments for it and try

02:24:57 and give me convincing arguments on both sides.

02:25:02 And so that’s another important kind of ability that the system needs to be able to exhibit

02:25:09 in order to really be intelligent, I think.

02:25:12 So there’s certain, I mean, if you look at IBM Watson and like certain impressive accomplishments

02:25:18 for very specific tests, almost like a demo, right?

02:25:24 There’s some, like, I talked to the guy who led the Jeopardy effort, and there’s some

02:25:34 kind of hard-coded heuristic tricks that they pulled together to make the

02:25:40 thing work in the end, right?

02:25:43 That seems to be one of the lessons with AI is like, that’s the fastest way to get a solution

02:25:49 that’s pretty damn impressive.

02:25:50 So here’s what I would say is that as impressive as that was, it made some mistakes, but more

02:25:59 importantly, many of the mistakes it made were mistakes which no human would have made.

02:26:07 And so part of the new or augmented Turing test would have to be that the mistakes you

02:26:17 make are not ones which humans basically look at and say, what?

02:26:24 So for example, there was a question about which 16th century Italian politician, blah,

02:26:33 blah, blah, and Watson said Ronald Reagan.

02:26:37 So most Americans would have gotten that question wrong, but they would never have said Ronald

02:26:42 Reagan as an answer because among the things they know is that he lived relatively recently

02:26:49 and people don’t really live 400 years and things like that.

02:26:53 So that’s, I think, a very important thing, which is if it’s making mistakes which no

02:27:00 normal sane human would have made, then that’s a really bad sign.

02:27:05 And if it’s not making those kinds of mistakes, then that’s a good sign.
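The kind of sanity check described here, rejecting candidate answers that violate common-sense constraints such as human lifespans, can be sketched as code. The names, birth years, and the 120-year lifespan bound below are illustrative assumptions, not the logic of any real question-answering system:

```python
# Toy common-sense filter: a person proposed as the answer to a question
# about a given century must plausibly have been alive in that century.

# Illustrative facts (birth years).
BIRTH_YEAR = {
    "Niccolo Machiavelli": 1469,
    "Ronald Reagan": 1911,
}

MAX_HUMAN_LIFESPAN = 120  # generous upper bound; people don't live 400 years

def plausible_for_century(candidate, century):
    """Could this person have been alive at some point in the given century?"""
    birth = BIRTH_YEAR[candidate]
    earliest = (century - 1) * 100 + 1   # 16th century -> year 1501
    latest = earliest + 99               #                ... to 1600
    return birth <= latest and birth + MAX_HUMAN_LIFESPAN >= earliest

# Question about a "16th-century Italian politician":
print(plausible_for_century("Niccolo Machiavelli", 16))  # True
print(plausible_for_century("Ronald Reagan", 16))        # False
```

A human who doesn’t know the answer still applies this filter automatically; a system that skips it produces exactly the “no sane human would say that” mistakes Lenat is pointing at.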

02:27:10 And I don’t think it’s any one very, very simple test.

02:27:12 I think it’s all of the things you mentioned, all the things I mentioned is really a battery

02:27:17 of tests, which together, if it passes almost all of these tests, it’d be hard to argue

02:27:23 that it’s not intelligent.

02:27:25 And if it fails several of these tests, it’s really hard to argue that it really understands

02:27:30 what it’s doing and that it really is generally intelligent.

02:27:33 So to pass all of those tests, we’ve talked a lot about Cyc and knowledge and reasoning.

02:27:40 Do you think this AI system would need to have some other human like elements, for example,

02:27:47 a body or a physical manifestation in this world?

02:27:52 And another one which seems to be fundamental to the human experience is consciousness.

02:27:59 The subjective experience of what it’s like to actually be you.

02:28:04 Do you think it needs those to be able to pass all of those tests and to achieve general

02:28:08 intelligence?

02:28:09 It’s a good question.

02:28:10 I think in the case of a body, no, I know there are a lot of people like Penrose who

02:28:15 would have disagreed with me and others, but no, I don’t think it needs to have a body

02:28:21 in order to be intelligent.

02:28:24 I think that it needs to be able to talk about having a body and having sensations and having

02:28:32 emotions and so on.

02:28:33 It doesn’t actually have to have all of that, but it has to understand it in the same way

02:28:39 that Helen Keller was perfectly intelligent and able to talk about colors and sounds and

02:28:47 shapes and so on, even though she didn’t directly experience all the same things that the rest

02:28:54 of us do.

02:28:55 So knowledge of it and being able to correctly make use of that is certainly an important

02:29:04 facility, but actually having a body, if you believe that that’s just a kind of religious

02:29:09 or mystical belief, you can’t really argue for or against it, I suppose.

02:29:15 It’s just something that some people believe.

02:29:19 What about an extension of the body, which is consciousness?

02:29:24 It feels like something to be here.

02:29:27 Sure.

02:29:28 But what does that really mean?

02:29:30 It’s like, well, if I talk to you, you say things which make me believe that you’re conscious.

02:29:35 I know that I’m conscious, but you’re just taking my word for it now.

02:29:40 But in that same sense, Cyc is already conscious, where of course

02:29:46 it’s a computer program, it understands where and when it’s running, it understands who’s

02:29:50 talking to it, it understands what its task is, what its goals are, what its current problem

02:29:55 is that it’s working on, it understands how long it’s spent on things, what it’s tried,

02:29:59 it understands what it’s done in the past, and so on.

02:30:06 If we want to call that consciousness, then yes, Cyc is already conscious.

02:30:11 But I don’t think that I would ascribe anything mystical to that.

02:30:15 Again, some people would, but I would say that other than our own personal experience

02:30:21 of consciousness, we’re just taking everyone else in the world, so to speak, at their word

02:30:27 about being conscious.

02:30:29 And so if a computer program, if an AI is able to exhibit all the same kinds of response

02:30:39 as you would expect of a conscious entity, then doesn’t it deserve the label of consciousness

02:30:46 just as much?

02:30:47 So there’s another burden that comes with this whole intelligence thing that humans

02:30:51 got is the extinguishing of the light of consciousness, which is kind of realizing that we’re going

02:30:59 to be dead someday.

02:31:02 And there’s a bunch of philosophers like Ernest Becker, who kind of think that this realization

02:31:09 of mortality, and then fear, sometimes they call it terror of mortality, is one of the

02:31:18 creative forces behind human condition, like, it’s the thing that drives us.

02:31:24 Do you think it’s important for an AI system?

02:31:27 You know, since Cyc knows that it’s not human, and that’s one of the assertions in

02:31:36 its contents, you know, there’s another question it could ask, which is like, it kind of knows

02:31:43 that humans are mortal: am I mortal?

02:31:47 And I think one really important thing that’s possible when you’re conscious is to fear

02:31:54 the extinguishing of that consciousness, the fear of mortality.

02:31:59 Do you think that’s useful for intelligence, thinking like, I might die, and I really don’t

02:32:04 want to die?

02:32:05 I don’t think so.

02:32:06 I think it may help some humans to be better people.

02:32:12 It may help some humans to be more creative, and so on.

02:32:16 I don’t think it’s necessary for AIs to believe that they have limited lifespans, and therefore

02:32:23 they should make the most of their behavior.

02:32:26 Maybe eventually the answer to that and my answer to that will change, but as of now

02:32:30 I would say that that’s almost like a frill or a side effect that is not necessary. In fact, if

02:32:36 you look at most humans, most humans ignore the fact that they’re going to die most of

02:32:42 the time.

02:32:43 Well, but that’s like, this goes to the white space between the words.

02:32:49 So what Ernest Becker argues is that that ignoring is we’re living in an illusion that

02:32:54 we constructed on the foundation of this terror.

02:32:57 So we escape. Life as we know it, pursuing things, creating things, love, everything

02:33:05 we can think of that’s beautiful about humanity is just trying to escape this realization

02:33:11 that we’re going to die one day.

02:33:13 That’s his idea, and I think, I don’t know if I 100% believe in this, but it certainly

02:33:21 rhymes.

02:33:22 It seems like to me like it rhymes with the truth.

02:33:25 Yeah.

02:33:26 I think that for some people that’s going to be a more powerful factor than others.

02:33:33 Clearly Doug is talking about Russians.

02:33:35 So I’m Russian, so clearly it infiltrates all of Russian literature.

02:33:44 An AI doesn’t have to have fear of death as a motivating force, in that we can build

02:33:53 in motivation.

02:33:55 So we can build in the motivation of obeying users and making users happy and making others

02:34:03 happy and so on, and that can substitute for this sort of personal fear of death that sometimes

02:34:12 leads to bursts of creativity in humans.

02:34:16 Yeah, I don’t know.

02:34:18 I think AI really needs to understand death deeply in order to be able to drive a car,

02:34:23 for example.

02:34:24 I think there’s just some, like, there…

02:34:28 No, I really disagree.

02:34:30 I think it needs to understand the value of human life, especially the value of human

02:34:34 life to other humans, and understand that certain things are more important than other

02:34:41 things.

02:34:42 So it has to have a lot of knowledge about ethics and morality and so on.

02:34:48 But some of it is so messy that it’s impossible to encode.

02:34:51 For example, there’s…

02:34:52 I disagree.

02:34:53 So if there’s a person dying right in front of us, most human beings would help that person,

02:34:59 but they would not apply that same ethics to everybody else in the world.

02:35:04 This is the tragedy of how difficult it is to be a doctor, because they know when they

02:35:09 help a dying child, they know that the money they’re spending on this child cannot possibly

02:35:15 be spent on every other child that’s dying.

02:35:18 And that’s a very difficult decision to encode.

02:35:24 Perhaps it is…

02:35:26 Perhaps it could be formalized.

02:35:27 Oh, but I mean, you’re talking about autonomous vehicles, right?

02:35:31 So autonomous vehicles are going to have to make those decisions all the time of, what

02:35:39 is the chance of this bad event happening?

02:35:43 How bad is that compared to this chance of that bad event happening?

02:35:46 And so on.

02:35:47 And when a potential accident is about to happen, is it worth taking this risk?

02:35:52 If I have to make a choice, which of these two cars am I going to hit and why?

02:35:56 See, I was thinking about a very different choice when I’m talking about fear of mortality,

02:36:01 which is just observing Manhattan-style driving.

02:36:06 I think that a human, as an effective driver, needs to threaten pedestrians’ lives a lot.

02:36:14 There’s a dance, I’ve watched pedestrians a lot, I worked on this problem, and it seems

02:36:19 like, if I could summarize the problem of a pedestrian crossing, the car, with its

02:36:27 movement, is saying, I’m going to kill you.

02:36:30 And the pedestrian is saying, maybe.

02:36:33 And then they decide and they say, no, I don’t think you have the guts to kill me.

02:36:36 And you walk and they walk in front and they look away.

02:36:39 And there’s that dance, the pedestrian, this is a social contract that the pedestrian trusts

02:36:46 that once they’re in front of the car and the car is sufficiently, from a physics perspective,

02:36:51 able to stop, they’re going to stop.

02:36:53 But the car also has to threaten the pedestrian, like, I’m late for work, so you’re being

02:36:58 kind of an asshole by crossing in front of me.

02:37:01 But life and death is, like, part of the calculation here.

02:37:06 And it’s that equation is being solved millions of times a day.

02:37:11 Yes.

02:37:12 Very effectively, that game theory, whatever that formulation is.

02:37:15 Absolutely.

02:37:16 I just don’t know if it’s as simple as some formalizable game theory problem.

02:37:22 It could very well be in the case of driving and in the case of most of human society.

02:37:28 I don’t know.

02:37:29 But yeah, you might be right that sort of the fear of death is just one of the quirks

02:37:34 of like the way our brains have evolved, but it’s not a necessary feature of intelligence.

02:37:42 Drivers certainly are always doing this kind of estimate, even if it’s unconscious, subconscious,

02:37:48 of what are the chances of various bad outcomes happening?

02:37:52 Like for instance, if I don’t wait for this pedestrian or something like that, and what

02:37:59 is the downside to me going to be in terms of time wasted talking to the police or getting

02:38:07 sent to jail or things like that?
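The running estimate described here can be sketched as a simple expected-cost comparison. The probabilities and costs below are invented purely for illustration; a real driving system would need far richer models of outcomes and values:

```python
# Minimal expected-cost sketch of the estimate a driver constantly makes:
# for each action, sum probability-weighted costs of its possible outcomes.

def expected_cost(outcomes):
    """outcomes: list of (probability, cost) pairs for one action."""
    return sum(p * c for p, c in outcomes)

# Action A: don't wait for the pedestrian (illustrative numbers).
dont_wait = [
    (0.001, 1_000_000),  # tiny chance of a catastrophic collision
    (0.05, 500),         # chance of a ticket / talking to the police
    (0.949, 0),          # most likely: nothing bad happens
]

# Action B: wait.
wait = [
    (1.0, 5),            # certain small cost: a few seconds lost
]

print(expected_cost(dont_wait))  # ~1025.0
print(expected_cost(wait))       # 5.0
```

Even with a very low collision probability, the catastrophic cost dominates, which is why the estimate, conscious or not, usually comes out in favor of waiting.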

02:38:11 And there’s also emotion, like people in their cars tend to get irrationally angry.

02:38:17 That’s dangerous.

02:38:18 But, you know, think about this is all part of why I think that autonomous vehicles, truly

02:38:24 autonomous vehicles are farther out than most people think, because there is this enormous

02:38:31 level of complexity which goes beyond mechanically controlling the car.

02:38:38 And I can see the autonomous vehicles as a kind of metaphorical and literal accident

02:38:45 waiting to happen.

02:38:47 And not just because of the accidents they incur versus prevent overall and so on, but

02:38:56 just because of the almost voracious appetite people have for bad stories about powerful

02:39:10 companies and powerful entities.

02:39:12 When I was at a, coincidentally, Japanese fifth generation computing system conference

02:39:19 in 1987, while I happened to be there, there was a worker at an auto plant who was despondent

02:39:26 and committed suicide by climbing under the safety chains and so on and getting stamped

02:39:30 to death by a machine.

02:39:32 And instead of being a small story that said despondent worker commits suicide, it was front

02:39:38 page news that effectively said robot kills worker, because the public is just waiting

02:39:46 for stories about, like, AI kills photogenic family of five type stories.

02:39:54 And even if you could show that nationwide, this system saved more lives than it cost

02:40:01 and prevented more injuries than it caused and so on, the media, the public, the government

02:40:09 is just coiled and ready to pounce on stories where in fact it failed, even if they’re relatively

02:40:18 few.

02:40:19 Yeah, it’s so fascinating to watch us humans resisting the cutting edge of science and

02:40:26 technology and almost like hoping for it to fail and constant, you know, this just happens

02:40:31 over and over and over throughout history.

02:40:33 Or even if we’re not hoping for it to fail, we’re fascinated by it.

02:40:37 And in terms of what we find interesting, the one in a thousand failures, much more

02:40:43 interesting than the 999 boring successes.

02:40:48 So once we build an AGI system, say Cyc is some part of it, and say it’s very possible

02:40:57 that you would be one of the first people that can sit down in the room, let’s say with

02:41:03 her and have a conversation, what would you ask her?

02:41:07 What would you talk about?

02:41:09 Looking at all of the content out there on the web and so on, what are some possible

02:41:26 solutions to big problems that the world has that people haven’t really thought of before

02:41:33 that are not being properly or at least adequately pursued?

02:41:40 What are some novel solutions that you can think of that we haven’t that might work and

02:41:46 that might be worth considering?

02:41:48 So that is a damn good question.

02:41:51 Given that the AGI is going to be somewhat different from human intelligence, it’s still

02:41:56 going to make some mistakes that we wouldn’t make, but it’s also possibly going to notice

02:42:02 some blind spots we have.

02:42:04 And I would love as a test of is it really on a par with our intelligences, can it help

02:42:13 spot some of the blind spots that we have?

02:42:17 So the two part question of can you help identify what are the big problems in the world?

02:42:23 And two, what are some novel solutions to those problems?

02:42:27 That are not being talked about by anyone.

02:42:31 And some of those may turn out to be infeasible or reprehensible or something, but some of them

02:42:37 might be actually great things to look at.

02:42:40 If you go back and look at some of the most powerful discoveries that have been made,

02:42:45 like relativity and superconductivity and so on, a lot of them were cases where someone

02:42:56 took seriously the idea that there might actually be a non-obvious answer to a question.

02:43:04 So in Einstein’s case, it was, yeah, the Lorentz transformation is known.

02:43:09 Nobody believes that it’s actually the way reality works.

02:43:12 What if it were the way that reality actually worked?

02:43:15 So a lot of people don’t realize he didn’t actually work out that equation, he just sort

02:43:19 of took it seriously.

02:43:20 Or in the case of superconductivity, you have this V equals IR equation where R is resistance

02:43:26 and so on.

02:43:28 And it was being mapped at lower and lower temperatures, but everyone thought that was

02:43:33 just bump on a log research to show that V equals IR always held.

02:43:39 And then when some graduate student got to a slightly lower temperature and showed that

02:43:46 resistance suddenly dropped off, everyone just assumed that they did it wrong.

02:43:50 And it was only a little while later that they realized it was actually a new phenomenon.

02:43:56 Or in the case of the H. pylori bacteria causing stomach ulcers, where everyone thought that

02:44:04 stress and stomach acid caused ulcers.

02:44:08 And when a doctor in Australia claimed it was actually a bacterial infection, he couldn’t

02:44:15 get anyone seriously to listen to him and he had to ultimately inject himself with the

02:44:21 bacteria to show that he suddenly developed a life threatening ulcer in order to get other

02:44:27 doctors to seriously consider that.

02:44:29 So there are all sorts of things where humans are locked into paradigms, what Thomas Kuhn

02:44:35 called paradigms, and we can’t get out of them very easily.

02:44:40 So a lot of AI is locked into the deep learning machine learning paradigm right now.

02:44:47 And almost all of us and almost all sciences are locked into current paradigms.

02:44:52 And Kuhn’s point was pretty much you have to wait for people to die in order for the

02:45:00 new generation to escape those paradigms.

02:45:03 And I think that one of the things that would change that sad reality is if we had trusted

02:45:09 AGI’s that could help take a step back and question some of the paradigms that we’re

02:45:16 currently locked into.

02:45:17 Yeah, it would accelerate the paradigm shifts in human science and progress.

02:45:25 You’ve lived a very interesting life where you thought about big ideas and you stuck

02:45:30 with them.

02:45:32 Can you give advice to young people today, somebody in high school, somebody undergrad,

02:45:38 about career, about life?

02:45:43 I’d say you can make a difference.

02:45:47 But in order to make a difference, you’re going to have to have the courage to follow

02:45:53 through with ideas which other people might not immediately understand or support.

02:46:02 You have to realize that if you make some plan that’s going to take an extended period

02:46:12 of time to carry out, don’t be afraid of that.

02:46:16 That’s true of physical training of your body.

02:46:20 That’s true of learning some profession.

02:46:27 It’s also true of innovation, that some innovations are not great ideas you can write down on

02:46:33 a napkin and become an instant success if you turn out to be right.

02:46:38 Some of them are paths you have to follow, but remember that you’re mortal.

02:46:45 Remember that you have a limited number of decade-sized bets to make with your life

02:46:53 and you should make each one of them count.

02:46:55 And that’s true in personal relationships.

02:46:58 That’s true in career choice.

02:47:00 That’s true in making discoveries and so on.

02:47:03 And if you follow the path of least resistance, you’ll find that you’re optimizing for short

02:47:10 periods of time.

02:47:12 And before you know it, you turn around and long periods of time have gone by without

02:47:17 you ever really making a difference in the world.

02:47:21 When you look at the field that I really love is artificial intelligence and there’s not

02:47:26 many projects, there’s not many little flames of hope that have been carried out for many

02:47:33 years, for decades, and Cyc represents one of them.

02:47:36 And I mean that in itself is just a really inspiring thing.

02:47:42 So I’m deeply grateful that you would be carrying that flame for so many years and I think that’s

02:47:47 an inspiration to young people.

02:47:50 That said, you said life is finite and we talked about mortality as a feature of AGI.

02:47:55 Do you think about your own mortality?

02:47:57 Are you afraid of death?

02:47:59 Sure, I’d be crazy if I weren’t.

02:48:03 And as I get older, I’m now over 70.

02:48:07 So as I get older, it’s more on my mind, especially as acquaintances and friends and especially

02:48:14 mentors, one by one are dying, so I can’t avoid thinking about mortality.

02:48:22 And I think that the good news from the point of view of the rest of the world is that

02:48:28 that adds impetus to my need to succeed in a small number of years in the future.

02:48:34 You have a deadline.

02:48:36 Exactly.

02:48:37 I’m not going to have another 37 years to continue working on this.

02:48:41 So we really do want Cyc to make an impact in the world commercially, physically, metaphysically

02:48:50 in the next small number of years, two, three, five years, not two, three, five decades anymore.

02:48:56 And so this is really driving me toward this sort of commercialization and increasingly

02:49:04 widespread application of Cyc.

02:49:08 Whereas before, I felt that I could just sort of sit back, roll my eyes, wait till the world

02:49:13 caught up.

02:49:14 And now I don’t feel that way anymore.

02:49:16 I feel like I need to put in some effort to make the world aware of what we have and what

02:49:22 it can do.

02:49:23 And the good news from your point of view is that that’s why I’m sitting here.

02:49:27 You’re going to be more productive.

02:49:30 I love it.

02:49:31 And if I can help in any way, I would love to.

02:49:34 From a programmer perspective, I love, especially these days, just contributing in small and

02:49:41 big ways.

02:49:42 So if there’s any open sourcing from an MIT side and the research, I would love to help.

02:49:48 But bigger than Cyc, like I said, it’s that little flame that you’re carrying of

02:49:53 artificial intelligence, the big dream.

02:49:58 What do you hope your legacy is?

02:50:02 That’s a good question.

02:50:05 That people think of me as one of the pioneers or inventors of the AI that is ubiquitous

02:50:15 and that they take for granted and so on.

02:50:19 Much the way that today we look back on the pioneers of electricity or the pioneers of

02:50:28 similar types of technologies and so on.

02:50:33 It’s hard to imagine what life would be like if these people hadn’t done what they did.

02:50:40 So that’s one thing that I’d like to be remembered as.

02:50:44 Another is as the creator, one of the originators of this gigantic knowledge store and acquisition

02:50:53 system that is likely to be at the center of whatever this future AI thing will look

02:51:00 like.

02:51:01 Yes, exactly.

02:51:02 And I’d also like to be remembered as someone who wasn’t afraid to spend several decades

02:51:11 on a project in a time when almost all of the other forces, institutional forces and

02:51:23 commercial forces, are incenting people to go for short term rewards.

02:51:29 And a lot of people gave up.

02:51:31 A lot of people that dreamt the same dream as you gave up and you didn’t.

02:51:40 I mean, Doug, it’s truly an honor.

02:51:42 This is a long time coming.

02:51:45 A lot of people bring up your work specifically and more broadly, philosophically of this

02:51:53 is the dream of artificial intelligence.

02:51:55 This is likely a part of the future.

02:51:57 We’re so sort of focused on machine learning applications, all that kind of stuff today.

02:52:02 But it seems like the ideas that Cyc carries forward are something that will be at the center

02:52:08 of this problem they’re all trying to solve, which is the problem of intelligence, emotional

02:52:15 and otherwise.

02:52:16 So thank you so much.

02:52:18 It’s such a huge honor that you would talk to me and spend your valuable time with me

02:52:22 today.

02:52:23 Thanks for talking.

02:52:24 Thanks, Lex.

02:52:25 It’s been great.

02:52:26 Thanks for listening to this conversation with Doug Lenat.

02:52:29 To support this podcast, please check out our sponsors in the description.

02:52:33 And now, let me leave you with some words from Mark Twain about the nature of truth.

02:52:40 If you tell the truth, you don’t have to remember anything.

02:52:44 Thank you for listening and hope to see you next time.