Pamela McCorduck: Machines Who Think and the Early Days of AI #34

Transcript

00:00:00 The following is a conversation with Pamela McCorduck. She’s an author who has written on

00:00:04 the history and the philosophical significance of artificial intelligence. Her books include

00:00:10 Machines Who Think in 1979, The Fifth Generation in 1983 with Ed Feigenbaum, who’s considered to

00:00:18 be the father of expert systems, The Edge of Chaos that features women, and many more books.

00:00:24 I came across her work in an unusual way by stumbling upon a quote from Machines Who Think

00:00:29 that is something like, artificial intelligence began with the ancient wish to forge the gods.

00:00:37 That was a beautiful way to draw a connecting line between our societal relationship with AI

00:00:42 from the grounded day to day science, math and engineering, to popular stories and science

00:00:48 fiction and myths of automatons that go back for centuries. Through her literary work,

00:00:54 she has spent a lot of time with the seminal figures of artificial intelligence, including

00:01:00 the founding fathers of AI from the 1956 Dartmouth summer workshop where the field was launched.

00:01:08 I reached out to Pamela for a conversation in hopes of getting a sense of what those early

00:01:13 days were like, and how their dreams continue to reverberate through the work of our community

00:01:19 today. I often don’t know where the conversation may take us, but I jump in and see. Having no

00:01:25 constraints, rules, or goals is a wonderful way to discover new ideas. This is the Artificial

00:01:31 Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes,

00:01:37 support it on Patreon, or simply connect with me on Twitter, at Lex Friedman, spelled F R I D M

00:01:44 A N. And now, here’s my conversation with Pamela McCorduck. In 1979, your book Machines Who Think

00:01:55 was published. In it, you interview some of the early AI pioneers and explore the idea that

00:02:00 AI was born not out of maybe math and computer science, but out of myth and legend. So, tell me

00:02:10 if you could the story of how you first arrived at the book, the journey of beginning to write it.

00:02:19 I had been a novelist. I’d published two novels, and I was sitting under the portal at Stanford

00:02:29 one day, the house we were renting for the summer. And I thought, I should write a novel about these

00:02:33 weird people in AI, I know. And then I thought, ah, don’t write a novel, write a history. Simple.

00:02:41 Just go around, interview them, splice it together, voila, instant book. Ha, ha, ha. It was

00:02:48 much harder than that. But nobody else was doing it. And so, I thought, well, this is a great

00:02:54 opportunity. And there were people who, John McCarthy, for example, thought it was a nutty

00:03:03 idea. The field had not evolved yet, and so on. And he had some mathematical thing he thought I should

00:03:11 write instead. And I said, no, John, I am not a woman in search of a project. This is what I want

00:03:17 to do. I hope you’ll cooperate. And he said, oh, mutter, mutter, well, okay, it’s your time.

00:03:24 What was the pitch for the, I mean, such a young field at that point. How do you write

00:03:30 a personal history of a field that’s so young? I said, this is wonderful. The founders of the

00:03:37 field are alive and kicking and able to talk about what they’re doing. Did they sound or feel like

00:03:42 founders at the time? Did they know that they have founded something?

00:03:48 Oh, yeah. They knew what they were doing was very important. Very. What I now see in retrospect

00:03:56 is that they were at the height of their research careers. And it’s humbling to me that they took

00:04:04 time out from all the things that they had to do as a consequence of being there. And to talk to

00:04:11 this woman who said, I think I’m going to write a book about you. No, it was amazing. Just amazing.

00:04:17 So who stands out to you? Maybe looking back 63 years to the Dartmouth conference,

00:04:26 so Marvin Minsky was there, McCarthy was there, Claude Shannon, Allen Newell, Herb Simon,

00:04:32 some of the folks you’ve mentioned. Then there’s other characters, right? One of your coauthors

00:04:40 He wasn’t at Dartmouth.

00:04:43 He wasn’t at Dartmouth?

00:04:43 No. He was, I think, an undergraduate then.

00:04:47 And of course, Joe Traub. All of these are players, not at Dartmouth, but in that era.

00:04:56 Right.

00:04:57 CMU and so on. So who are the characters, if you could paint a picture, that stand out to you

00:05:02 from memory? Those people you’ve interviewed and maybe not, people that were just in the

00:05:08 In the atmosphere.

00:05:09 In the atmosphere.

00:05:11 Of course, the four founding fathers were extraordinary guys. They really were.

00:05:15 Who are the founding fathers?

00:05:18 Allen Newell, Herbert Simon, Marvin Minsky, John McCarthy. They were the four who were not only

00:05:24 at the Dartmouth conference, but Newell and Simon arrived there with a working program

00:05:29 called The Logic Theorist. Everybody else had great ideas about how they might do it, but

00:05:34 But they weren’t going to do it yet.

00:05:41 And you mentioned Joe Traub, my husband. I was immersed in AI before I met Joe

00:05:50 because I had been Ed Feigenbaum’s assistant at Stanford. And before that,

00:05:55 I had worked on a book edited by Feigenbaum and Julian Feldman called Computers and Thought.

00:06:04 It was the first textbook of readings of AI. And they only did it because they were trying to teach

00:06:10 AI to people at Berkeley. And there was nothing, you’d have to send them to this journal and that

00:06:15 journal. This was not the internet where you could go look at an article. So I was fascinated from

00:06:22 the get go by AI. I was an English major. What did I know? And yet I was fascinated. And that’s

00:06:30 why you saw that historical, that literary background, which I think is very much a part

00:06:38 of the continuum of AI, that AI grew out of that same impulse. That traditional, what was,

00:06:47 what drew you to AI? How did you even think of it back then? What were the possibilities,

00:06:54 the dreams? What was interesting to you? The idea of intelligence outside the human cranium,

00:07:03 this was a phenomenal idea. And even when I finished Machines Who Think,

00:07:08 I didn’t know if they were going to succeed. In fact, the final chapter is very wishy-washy,

00:07:15 frankly. Succeed, the field did. Yeah. So was there the idea that AI began with the wish to

00:07:25 forge the gods? So the spiritual component that we crave to create this other thing greater than

00:07:33 ourselves. For those guys, I don’t think so. Newell and Simon were cognitive psychologists.

00:07:42 What they wanted was to simulate aspects of human intelligence,

00:07:49 and they found they could do it on the computer. Minsky just thought it was a really cool thing

00:07:57 to do. Likewise, McCarthy. McCarthy had got the idea in 1949 when he was a Caltech student.

00:08:06 And he listened to somebody’s lecture. It’s in my book. I forget who it was. And he thought,

00:08:15 oh, that would be fun to do. How do we do that? And he took a very mathematical approach.

00:08:21 Minsky was hybrid, and Newell and Simon were very much cognitive psychology. How can we simulate

00:08:29 various things about human cognition? What happened over the many years is, of course,

00:08:37 our definition of intelligence expanded tremendously. These days, biologists are

00:08:44 comfortable talking about the intelligence of the cell, the intelligence of the brain,

00:08:49 not just human brain, but the intelligence of any kind of brain. Cephalopods, I mean, an octopus is

00:09:00 really intelligent by any account. We wouldn’t have thought of that in the 60s, even the 70s.

00:09:06 So all these things have worked in. And I did hear one behavioral primatologist, Frans de Waal,

00:09:16 say, AI taught us the questions to ask. Yeah, this is what happens, right? When you try to build it,

00:09:26 is when you start to actually ask questions. It puts a mirror to ourselves. Yeah, right. So you

00:09:32 were there in the middle of it. It seems like not many people were asking the questions that

you were, or just trying to look at this field the way you were. I was solo. When I went to

00:09:45 get funding for this because I needed somebody to transcribe the interviews and I needed travel

00:09:53 expenses, I went to everything you could think of, the NSF, DARPA. There was an Air Force

00:10:07 place that doled out money. And each of them said, well, that’s a very interesting idea.

00:10:15 But we’ll think about it. And the National Science Foundation actually said to me in plain English,

00:10:23 hey, you’re only a writer. You’re not a historian of science. And I said, yeah, that’s true. But

00:10:30 the historians of science will be crawling all over this field. I’m writing for the general

00:10:35 audience, so I thought. And they still wouldn’t budge. I finally got a private grant without

00:10:43 knowing who it was from, from Ed Fredkin at MIT. He was a wealthy man, and he liked what he called

00:10:51 crackpot ideas. And he considered this a crackpot idea, and he was willing to support it. I am ever

00:10:58 grateful, let me say that. Some would say that a history of science approach to AI, or even just a

00:11:06 history, or anything like the book that you’ve written, hasn’t been written since. Maybe I’m

00:11:13 not familiar, but it’s certainly not many. If we think about bigger than just these couple of

00:11:20 decades, few decades, what are the roots of AI? Oh, they go back so far. Yes, of course, there’s

00:11:30 all the legendary stuff, the Golem and the early robots of the 20th century. But they go back much

00:11:41 further than that. If you read Homer, Homer has robots in the Iliad. And a classical scholar was

00:11:49 pointing out to me just a few months ago, well, you said you just read the Odyssey. The Odyssey

00:11:54 is full of robots. It is? I said. Yeah. How do you think Odysseus’s ship gets from one place to

00:12:00 another? He doesn’t have the crew people to do that, the crewmen. Yeah, it’s magic. It’s robots.

00:12:07 Oh, I thought, how interesting. So we’ve had this notion of AI for a long time. And then toward the

00:12:17 end of the 19th century, the beginning of the 20th century, there were scientists who actually

00:12:23 tried to make this happen some way or another, not successfully. They didn’t have the technology for

00:12:29 it. And of course, Babbage in the 1850s and 60s, he saw that what he was building was capable of

00:12:40 intelligent behavior. And when he ran out of funding, the British government finally said,

00:12:47 that’s enough. He and Lady Lovelace decided, oh, well, why don’t we play the ponies with this? He

00:12:55 had other ideas for raising money too. But if we actually reach back once again, I think people

00:13:02 don’t really know that robots, and ideas of robots, do appear. You talk about the Hellenic

00:13:09 and the Hebraic points of view. Oh, yes. Can you tell me about each? I defined it this way. The

00:13:16 Hellenic point of view is robots are great. They are party help. They help this guy Hephaestus,

00:13:25 this god Hephaestus in his forge. I presume he made them to help him and so on and so forth.

00:13:32 And they welcome the whole idea of robots. The Hebraic view has to do with, I think it’s the

00:13:40 second commandment, thou shalt not make any graven image. In other words, you better not

00:13:47 start imitating humans because that’s just forbidden. It’s the second commandment. And

00:13:55 a lot of the reaction to artificial intelligence has been a sense that this is somehow wicked,

00:14:08 this is somehow blasphemous. We shouldn’t be going there. Now, you can say, yeah, but there are going

00:14:17 to be some downsides. And I say, yes, there are, but blasphemy is not one of them.

00:14:21 You know, there is a kind of fear that feels to be almost primal. Is there religious roots to that?

00:14:29 Because so much of our society has religious roots. And so there is a feeling of, like you

00:14:36 said, blasphemy of creating the other, of creating something, you know, it doesn’t have to be

00:14:43 artificial intelligence. It’s creating life in general. It’s the Frankenstein idea.

00:14:48 There’s The Annotated Frankenstein on my coffee table. It’s a tremendous novel. It really is just

00:14:56 beautifully perceptive. Yes, we do fear this and we have good reason to fear it,

00:15:03 because it can get out of hand. Maybe you can speak to that fear,

00:15:08 the psychology, if you’ve thought about it. You know, there’s a practical set of fears,

00:15:12 concerns in the short term. You can think if we actually think about artificial intelligence

00:15:17 systems, you can think about bias and discrimination in algorithms. You can think about how social

00:15:29 networks have algorithms that recommend the content you see, and thereby these algorithms control

00:15:35 the behavior of the masses. There’s these concerns. But to me, it feels like the fear

00:15:40 that people have is deeper than that. So have you thought about the psychology of it?

00:15:46 I think in a superficial way I have. There is this notion that if we produce a machine that

00:15:57 can think, it will outthink us and therefore replace us.

00:16:01 I guess that’s a primal fear, almost a kind of mortality. So around the time you said

00:16:11 you worked at Stanford with Ed Feigenbaum. So let’s look at that one person. Throughout his

00:16:21 history, clearly a key person, one of the many in the history of AI. How has he changed, and in general

00:16:31 around him, how has Stanford changed, in the last... how many years are we talking about here?

00:16:36 Oh, since 65.

00:16:38 65. So maybe it doesn’t have to be about him. It could be bigger. But because he was a key

00:16:45 person in expert systems, for example, how is that, how are these folks who you’ve interviewed in the

00:16:54 70s, in ’79, changed through the decades?

00:16:58 In Ed’s case, I know him well. We are dear friends. We see each other every month or so. He told me

00:17:12 that when Machines Who Think first came out, he really thought all the front matter was kind of

00:17:17 baloney. And 10 years later, he said, no, I see what you’re getting at. Yes, this is an impulse

00:17:27 that has been a human impulse for thousands of years to create something outside the human

00:17:34 cranium that has intelligence. I think it’s very hard when you’re down at the algorithmic level,

00:17:46 and you’re just trying to make something work, which is hard enough, to step back and think of

00:17:53 the big picture. It reminds me of when I was in Santa Fe, I knew a lot of archaeologists,

00:17:59 which was a hobby of mine. And I would say, yeah, yeah, well, you can look at the shards and say,

00:18:07 oh, this came from this tribe and this came from this trade route and so on. But what about the big

00:18:14 picture? And a very distinguished archaeologist said to me, they don’t think that way. No,

00:18:21 they’re trying to match the shard to where it came from. Where did the remainder of this corn

00:18:30 come from? Was it grown here? Was it grown elsewhere? And I think this is part of any

00:18:37 scientific field. You’re so busy doing the hard work, and it is hard work, that you don’t step

00:18:46 back and say, oh, well, now let’s talk about the general meaning of all this. Yes.

00:18:53 So none of them, even Minsky and McCarthy, they…

00:18:58 Oh, those guys did. Yeah. The founding fathers did.

00:19:01 Early on or later?

00:19:03 Pretty early on. But in a different way from how I looked at it. The two cognitive psychologists,

00:19:11 Newell and Simon, they wanted to imagine reforming cognitive psychology so that we would really,

00:19:20 really understand the brain. Minsky was more speculative. And John McCarthy saw it as,

00:19:32 I think I’m doing him right by this, he really saw it as a great boon for human beings to have

00:19:40 this technology. And that was reason enough to do it. And he had wonderful, wonderful

00:19:48 fables about how if you do the mathematics, you will see that these things are really good for

00:19:56 human beings. And if you had a technological objection, he had an answer, a technological

00:20:03 answer. But here’s how we could get over that and then blah, blah, blah. And one of his favorite things

00:20:10 was what he called the literary problem, which of course he presented to me several times.

00:20:16 That is everything in literature, there are conventions in literature. One of the conventions

00:20:23 is that you have a villain and a hero. And the hero in most literature is human,

00:20:36 and the villain in most literature is a machine. And he said, that’s just not the way it’s going

00:20:41 to be. But that’s the way we’re used to it. So when we tell stories about AI, it’s always

00:20:47 with this paradigm. I thought, yeah, he’s right. Looking back at the classics, R.U.R. is certainly the

00:20:57 machines trying to overthrow the humans. Frankenstein is different. Frankenstein is

00:21:06 a creature. He never has a name. Frankenstein, of course, is the guy who created him, the human,

00:21:13 Dr. Frankenstein. This creature wants to be loved, wants to be accepted. And it is only when

00:21:13 Frankenstein turns his head, in fact runs the other way, and the creature is without love,

00:21:34 that he becomes the monster that he later becomes.

00:21:38 So who’s the villain in Frankenstein? It’s unclear, right?

00:21:43 Oh, it is unclear, yeah.

00:21:45 It’s really the people who drive him. By driving him away, they bring out the worst.

00:21:54 That’s right. They give him no human solace. And he is driven away, you’re right.

00:22:00 He becomes, at one point, the friend of a blind man. And he serves this blind man,

00:22:08 and they become very friendly. But when the sighted people of the blind man’s family come in,

00:22:14 ah, you’ve got a monster here. So it’s very didactic in its way. And what I didn’t know

00:22:23 is that Mary Shelley and Percy Shelley were great readers of the literature surrounding abolition

00:22:31 in the United States, the abolition of slavery. And they picked that up wholesale. You are making

00:22:38 monsters of these people because you won’t give them the respect and love that they deserve.

00:22:44 Do you have, if we get philosophical for a second, do you worry that once we create

00:22:52 machines that are a little bit more intelligent, let’s look at Roomba, the vacuums, the cleaner,

00:22:58 that this darker part of human nature where we abuse the other, the somebody who’s different,

00:23:08 will come out?

00:23:09 I don’t worry about it. I could imagine it happening. But I think that what AI has to offer

00:23:18 the human race will be so attractive that people will be won over.

00:23:25 So you have looked deep into these people, had deep conversations, and it’s interesting to get

00:23:32 a sense of stories of the way they were thinking and the way it was changed, the way your own

00:23:42 thinking about AI has changed. So you mentioned McCarthy. What about the years at CMU, Carnegie

00:23:51 Mellon, with Joe? Sure. Joe was not in AI. He was in algorithmic complexity.

00:24:03 Was there always a line between AI and computer science, for example?

00:24:07 Is AI its own place of outcasts? Was that the feeling?

00:24:10 There was a kind of outcast period for AI. For instance, in 1974, the new field was hardly 10

00:24:24 years old. The new field of computer science was asked by the National Science Foundation,

00:24:31 I believe, but it may have been the National Academies, I can’t remember,

00:24:34 to tell your fellow scientists where computer science is and what it means.

00:24:44 And they wanted to leave out AI. And they only agreed to put it in because Don Knuth said,

00:24:53 hey, this is important. You can’t just leave that out.

00:24:57 Really? Don, dude?

00:24:58 Don Knuth, yes.

00:24:59 I talked to him recently, too. Out of all the people.

00:25:02 Yes. But you see, an AI person couldn’t have made that argument. He wouldn’t have been believed.

00:25:08 But Knuth was believed. Yes.

00:25:10 So Joe Traub worked on the real stuff.

00:25:15 Joe was working on algorithmic complexity. But he would say in plain English again and again,

00:25:22 the smartest people I know are in AI.

00:25:24 Really?

00:25:25 Oh, yes. No question. Anyway, Joe loved these guys. What happened was that I guess it was

00:25:35 as I started to write Machines Who Think, Herb Simon and I became very close friends.

00:25:41 He would walk past our house on Northumberland Street every day after work. And I would just

00:25:47 be putting my cover on my typewriter. And I would lean out the door and say,

00:25:52 Herb, would you like a sherry? And Herb almost always would like a sherry. So he’d stop in

00:25:59 and we’d talk for an hour, two hours. My journal says we talked this afternoon for three hours.

00:26:06 What was on his mind at the time in terms of on the AI side of things?

00:26:11 Oh, we didn’t talk too much about AI. We talked about other things.

00:26:14 Just life.

00:26:15 We both love literature. And Herb had read Proust in the original French twice all the

00:26:24 way through. I can’t. I’ve read it in English in translation. So we talked about literature.

00:26:30 We talked about languages. We talked about music because he loved music. We talked about

00:26:36 art because he was actually enough of a painter that he had to give it up because he was afraid

00:26:44 it was interfering with his research and so on. So no, it was really just chat, chat.

00:26:51 But it was very warm. So one summer I said to Herb, my students have all the really

00:26:59 interesting conversations. I was teaching at the University of Pittsburgh then in the English

00:27:03 department. They get to talk about the meaning of life and that kind of thing. And what do I have?

00:27:09 I have university meetings where we talk about the photocopying budget and whether the course

00:27:17 on romantic poetry should be one semester or two. So Herb laughed. He said, yes, I know what you

00:27:23 mean. He said, but you could do something about that. Dot, that was his wife, Dot and I used to

00:27:30 have a salon at the University of Chicago every Sunday night. And we would have essentially an

00:27:38 open house and people knew. It wasn’t for a small talk. It was really for some topic of

00:27:47 depth. He said, but my advice would be that you choose the topic ahead of time. Fine, I said.

00:27:54 So we exchanged mail over the summer. That was US Post in those days because

00:28:01 you didn’t have personal email. And I decided I would organize it and there would be eight of us,

00:28:12 Allen Newell and his wife, Herb Simon and his wife Dorothea. There was a novelist in town,

00:28:21 a man named Mark Harris. He had just arrived and his wife Josephine. Mark was most famous then for

00:28:29 a novel called Bang the Drum Slowly, which was about baseball. And Joe and me, so eight people.

00:28:36 And we met monthly and we just sank our teeth into really hard topics and it was great fun.

00:28:45 How have your own views around artificial intelligence changed

00:28:53 through the process of writing Machines Who Think and afterwards, the ripple effects?

00:28:57 I was a little skeptical that this whole thing would work out. It didn’t matter. To me,

00:29:04 it was so audacious. AI generally. And in some ways, it hasn’t worked out the way I expected

00:29:16 so far. That is to say, there’s this wonderful lot of apps, thanks to deep learning and so on.

00:29:26 But those are algorithmic. And on the symbolic processing side, there’s very little yet.

00:29:39 And that’s a field that lies waiting for industrious graduate students.

00:29:45 Maybe you can tell me some figures that popped up in your life in the 80s with expert systems

00:29:53 where there was the symbolic AI possibilities of what most people think of as AI,

00:30:00 if you dream of the possibilities of AI, it’s really expert systems. And those hit a few walls

00:30:07 and there were challenges there. And I think, yes, they will reemerge again with some new

00:30:12 breakthroughs and so on. But what did that feel like, both the possibility and the winter that

00:30:17 followed, the slowdown in research? Ah, you know, this whole thing about AI winter is to me

00:30:25 a crock. So, no winters.

00:30:26 Because I look at the basic research that was being done in the 80s, which is supposed to be,

00:30:34 my God, it was really important. It was laying down things that nobody had thought about before,

00:30:40 but it was basic research. You couldn’t monetize it. Hence the winter.

00:30:44 That’s the winter. You know, research,

00:30:49 scientific research goes and fits and starts. It isn’t this nice smooth,

00:30:54 oh, this follows this follows this. No, it just doesn’t work that way.

00:30:59 The interesting thing, the way winters happen, it’s never the fault of the researchers.

00:31:05 It’s some source of hype, of overpromising. Well, no, let me take that back. Sometimes it

00:31:12 is the fault of the researchers. Sometimes certain researchers might overpromise the

00:31:17 possibilities. They themselves believe that we’re just a few years away. I sort of just recently

00:31:23 talked to Elon Musk and he believes he’ll have an autonomous vehicle, we’ll have autonomous vehicles,

00:31:28 in a year. And he believes it. A year?

00:31:30 A year, yeah. With mass deployment at that time.

00:31:33 For the record, this is 2019 right now. So he’s talking 2020.

00:31:38 To do the impossible, you really have to believe it. And I think what’s going to happen

00:31:44 when you believe it, because there’s a lot of really brilliant people around him,

00:31:48 is some good stuff will come out of it. Some unexpected brilliant breakthroughs will come out

of it when you really believe it, when you work that hard. I believe that. And I believe

00:31:58 autonomous vehicles will come. I just don’t believe it’ll be in a year. I wish.

00:32:02 But nevertheless, autonomous vehicles are a good example. There’s a feeling

00:32:09 many companies have promised by 2021, by 2022, Ford, GM, basically every single automotive

00:32:16 company has promised they’ll have autonomous vehicles. So that kind of over promise is what

00:32:21 leads to the winter. Because we’ll come to those dates, there won’t be autonomous vehicles.

00:32:26 And there’ll be a feeling, well, wait a minute, if we took your word at that time,

00:32:32 that means we just spent billions of dollars, had made no money, and there’s a counter response to

00:32:39 where everybody gives up on it. Sort of intellectually, at every level, the hope just

00:32:46 dies. And all that’s left is a few basic researchers. So you’re uncomfortable with

some aspects of this idea. Well, it’s the difference between science and commerce.

00:32:58 So you think science goes on the way it does?

00:33:04 Oh, science can really be killed by not getting proper funding or timely funding.

00:33:14 I think Great Britain was a perfect example of that. The Lighthill report in,

00:33:19 I can’t remember the year, essentially said, there’s no use Great Britain putting any money

00:33:26 into this, it’s going nowhere. And this was all about social factions in Great Britain.

00:33:37 Edinburgh hated Cambridge and Cambridge hated Manchester. Somebody else can write that story.

00:33:44 But it really did have a hard effect on research there. Now, they’ve come roaring back with

00:33:54 DeepMind. But that’s one guy and his visionaries around him. But just to push on that,

00:34:03 it’s kind of interesting. You have this dislike of the idea of an AI winter.

Where’s that coming from? Where were you? Oh, because I just don’t think it’s true.

00:34:15 There was a particular period of time. It’s a romantic notion, certainly.

00:34:21 Yeah, well. No, I admire science, perhaps more than I admire commerce. Commerce is fine. Hey,

00:34:33 you know, we all gotta live. But science has a much longer view than commerce and continues

00:34:46 almost regardless. It can’t continue totally regardless, but almost regardless of what’s

saleable and what’s not, what’s monetizable and what’s not. So the winter is just something

00:35:01 that happens on the commerce side, and the science marches on. That’s a beautifully optimistic

00:35:10 and inspiring message. I agree with you. I think if we look at the key people that work in AI,

the key scientists in most disciplines, they continue working out of the love for science.

00:35:22 You can always scrape up some funding to stay alive, and they continue working diligently.

00:35:31 But there certainly is a huge amount of funding now, and there’s a concern on the AI side and

00:35:38 deep learning. There’s a concern that we might, with over promising, hit another slowdown in

00:35:44 funding, which does affect the number of students, you know, that kind of thing.

00:35:47 Yeah, it does. So the kind of ideas you had in Machines Who Think,

00:35:52 did you continue that curiosity through the decades that followed?

00:35:56 Yes, I did. And what was your view, your historical view, of how the AI community evolved,

00:36:03 the conversations about it, the work? Has it persisted the same way from its birth?

00:36:09 No, of course not. It’s just as we were just talking, the symbolic AI really kind of dried up

00:36:19 and it all became algorithmic. I remember a young AI student telling me what he was doing,

00:36:27 and I had been away from the field long enough. I’d gotten involved with complexity at the Santa

00:36:33 Fe Institute. I thought, algorithms, yeah, they’re in the service of, but they’re not the main event.

00:36:41 No, they became the main event. That surprised me. And we all know the downside of this. We all

00:36:49 know that if you’re using an algorithm to make decisions based on a gazillion human decisions,

00:36:58 baked into it are all the mistakes that humans make, the bigotries, the short-sightedness,

and so on and so on. So you mentioned the Santa Fe Institute. So you’ve written the novel

00:37:13 Edge of Chaos, but it’s inspired by the ideas of complexity, a lot of which have been extensively

00:37:20 explored at the Santa Fe Institute. It’s another fascinating topic, just sort of emergent

00:37:31 complexity from chaos. Nobody knows how it happens really, but it seems to be where all the interesting

00:37:37 stuff does happen. So how did first, not your novel, but just complexity in general and the

00:37:44 work at Santa Fe, fit into the bigger puzzle of the history of AI? Or maybe even your personal

journey through that? One of the last projects I did

00:37:57 concerning AI in particular was looking at the work of Harold Cohen, the painter. And Harold was

00:38:06 deeply involved with AI. He was a painter first. And what his project, AARON, which was a lifelong

00:38:17 project, did was reflect his own cognitive processes. Okay. Harold and I, even though I wrote

00:38:30 a book about it, we had a lot of friction between us. And I went, I thought, this is it. The book

00:38:39 died. It was published and fell into a ditch. This is it. I’m finished. It’s time for me to

00:38:47 do something different. By chance, this was a sabbatical year for my husband. And we spent two

00:38:55 months at the Santa Fe Institute and two months at Caltech. And then the spring semester in Munich,

00:39:03 Germany. Okay. Those two months at the Santa Fe Institute were so restorative for me. And I began

00:39:15 to, the Institute was very small then. It was in some kind of office complex on Old Santa Fe Trail.

00:39:22 Everybody kept their door open. So you could crack your head on a problem. And if you finally didn’t

00:39:29 get it, you could walk in to see Stuart Kaufman or any number of people and say, I don’t get this.

00:39:39 Can you explain? And one of the people that I was talking to about complex adaptive systems

00:39:46 was Murray Gell-Mann. And I told Murray what Harold Cohen had done. And I said, you know,

00:39:55 this sounds to me like a complex adaptive system. And he said, yeah, it is. Well, what do you know?

00:40:02 Harold’s AARON had all these kids and cousins all over the world in science and in economics and

00:40:09 so on and so forth. I was so relieved. I thought, okay, your instincts are okay. You’re doing the

00:40:16 right thing. I didn’t have the vocabulary. And that was one of the things that the Santa Fe

00:40:21 Institute gave me. If I could have rewritten that book, no, it had just come out. I couldn’t rewrite

00:40:26 it. I would have had a vocabulary to explain what AARON was doing. Okay. So I got really interested

00:40:34 in what was going on at the Institute. The people were, again, bright and funny and willing to

00:40:44 explain anything to this amateur. George Cowan, who was then the head of the Institute, said he

00:40:51 thought it might be a nice idea if I wrote a book about the Institute. And I thought about it and I

00:40:58 had my eye on some other project, God knows what. And I said, I’m sorry, George. Yeah, I’d really

00:41:05 love to do it, but just not going to work for me at this moment. He said, oh, too bad. I think it

00:41:11 would make an interesting book. Well, he was right and I was wrong. I wish I’d done it. But that’s

00:41:17 interesting. I hadn’t thought about that, that that was a road not taken that I wish I’d taken.

00:41:22 Well, you know what? Just on that point, it’s quite brave for you as a writer, as sort of

00:41:31 coming from a world of literature and the literary thinking and historical thinking. I mean, just

00:41:37 from that world and bravely talking to quite, I assume, large egos in AI or in complexity.

00:41:49 Yeah, in AI or in complexity and so on. How’d you do it? I mean, I suppose they could be

00:41:59 intimidated of you as well because it’s two different worlds coming together.

00:42:03 I never picked up that anybody was intimidated by me.

00:42:06 But how were you brave enough? Where did you find the guts to sort of…

00:42:08 God, just dumb luck. I mean, this is an interesting rock to turn over. I’m going

00:42:14 to write a book about it. And you know, people have enough patience with writers

00:42:18 if they think they’re going to end up in a book that they let you flail around and so on.

00:42:24 Well, but they also look if the writer has,

00:42:28 if there’s a sparkle in their eye, if they get it.

00:42:31 Yeah, sure.

00:42:32 When were you at the Santa Fe Institute?

00:42:35 The time I’m talking about is 1990, 1991, 1992. But we then, because Joe was an external faculty

00:42:46 member, were in Santa Fe every summer. We bought a house there and I didn’t have that much to do

00:42:52 with the Institute anymore. I was writing my novels. I was doing whatever I was doing.

00:43:00 But I loved the Institute and I loved

00:43:08 again, the audacity of the ideas. That really appeals to me.

00:43:12 I think that there’s this feeling, much like in great institutes of neuroscience, for example,

00:43:23 that they’re in it for the long game of understanding something fundamental about

00:43:29 reality and nature. And that’s really exciting. So if we start now to look a little bit more recently,

00:43:36 how, you know, AI is really popular today. How is this world, you mentioned algorithmic,

00:43:46 but in general, is the spirit of the people, the kind of conversations you hear through the

00:43:51 grapevine and so on, is that different than the roots that you remember?

00:43:55 No. The same kind of excitement, the same kind of, this is really going to make a difference

00:44:01 in the world. And it will. It has. You know, a lot of folks, especially young, 20 years old or

00:44:07 something, they think we’ve just found something special here. We’re going to change the world

00:44:14 tomorrow. Do you have a sense of the time scale at which breakthroughs

00:44:24 in AI happen? I really don’t. Because look at deep learning.

00:44:32 That was, Geoffrey Hinton came up with the algorithm in ’86. But it took all these years

00:44:44 for the technology to be good enough to actually be applicable. So no, I can’t predict that at all.
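[Editor’s aside, not part of the conversation: the 1986 reference is presumably to backpropagation, popularized by Rumelhart, Hinton, and Williams that year. The sketch below is a minimal, hand-rolled illustration of the idea, a tiny 2-2-1 network learning XOR by gradient descent. The network shape, seed, learning rate, and epoch count are all illustrative choices, as are all names in the code.]

```python
import math
import random

random.seed(0)

def sig(x):
    """Logistic sigmoid, the classic 1980s activation function."""
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 2-2-1 network: weights start small and random.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1), random.uniform(-1, 1)]
b2 = 0.0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
lr = 0.5

def forward(x):
    h = [sig(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sig(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, y

def mean_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial_loss = mean_loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: push the error back through each layer
        # by the chain rule, then nudge every weight downhill.
        dy = (y - t) * y * (1 - y)
        dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh[j]
            for i in range(2):
                w1[j][i] -= lr * dh[j] * x[i]
        b2 -= lr * dy
final_loss = mean_loss()
```

The point she makes holds even at this toy scale: the algorithm itself is a few lines of chain rule; what took decades was the compute and data to make it useful.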

00:44:56 I can’t. I wouldn’t even try. Well, let me ask you to, not to try to predict, but to speak to the,

00:45:03 you know, I’m sure in the 60s, as it continues now, there’s people that think, let’s call it,

00:45:09 we can call it this fun word, the singularity. When there’s a phase shift, there’s some profound

00:45:16 feeling where we’re all really surprised by what’s able to be achieved. I’m sure those dreams are

00:45:22 there. I remember reading quotes in the 60s and those continued. How have your own views,

00:45:29 maybe if you look back, about the timeline of a singularity changed?

00:45:34 Well, I’m not a big fan of the singularity as Ray Kurzweil has presented it.

00:45:46 How would you define the Kurzweil version? How do you think of the singularity in those terms?

00:45:53 If I understand Kurzweil’s view, it’s sort of, there’s going to be this moment when machines

00:45:59 are smarter than humans and, you know, game over. However, the game over is. I mean, do they put us

00:46:07 on a reservation? Do they, et cetera, et cetera. And first of all, machines are smarter than humans

00:46:15 in some ways all over the place. And they have been since adding machines were invented.

00:46:21 So it’s not going to come like some great Oedipal crossroads, you know, where

00:46:29 they meet each other and our offspring, Oedipus, says, you’re dead. It’s just not going to happen.

00:46:37 Yeah. So it’s already game over with calculators, right? They can already do much better at

00:46:44 basic arithmetic than us. But you know, there’s human-like intelligence. And it’s not the ones

00:46:51 that destroy us, but you know, somebody that you can have as a friend, you can have deep

00:46:57 connections with, that kind of passing the Turing test and beyond, those kinds of ideas. Have you

00:47:04 dreamt of those? Oh yes, yes, yes. Those possibilities. In a book I wrote with Ed Feigenbaum,

00:47:10 there’s a little story called the geriatric robot.

00:47:17 And how I came up with the geriatric robot is a story in itself. But here’s what the geriatric

00:47:24 robot does. It doesn’t just clean you up and feed you and wheel you out into the sun.

00:47:29 Its great advantage is that it listens. It says, tell me again about the great coup of ’73. Tell me again

00:47:45 about how awful or how wonderful your grandchildren are and so on and so forth.

00:47:52 And it isn’t hanging around to inherit your money. It isn’t hanging around because it can’t get

00:47:59 any other job. This is its job. And so on and so forth. Well, I would love something like that.

00:48:09 Yeah. I mean, for me, that deeply excites me. So I think there’s a lot of us.

00:48:15 Lex, you gotta know, it was a joke. I dreamed it up because I needed to talk to college students

00:48:20 and I needed to give them some idea of what AI might be. And they were rolling in the aisles as

00:48:26 I elaborated and elaborated and elaborated. When it went into the book, they took my hide off

00:48:36 in the New York Review of Books. This is just what we have thought about these people in AI.

00:48:41 They’re inhuman. Come on, get over it. Don’t you think that’s a good thing for

00:48:47 the world that AI could potentially do? I do. Absolutely. And furthermore,

00:48:52 I’m pushing 80 now. By the time I need help like that, I also want it to roll itself in a corner

00:49:02 and shut the fuck up. Let me linger on that point. Do you really though?

00:49:09 Yeah, I do. Here’s why. Don’t you want it to push back a little bit?

00:49:13 A little. But I have watched my friends go through the whole issue around having help

00:49:20 in the house. And some of them have been very lucky and had fabulous help. And some of them

00:49:28 have had people in the house who want to keep the television going on all day, who want to talk on

00:49:34 their phones all day. No. Just roll yourself in the corner and shut the fuck up. Unfortunately,

00:49:41 us humans, when we’re assistants, we’re still, even when we’re assisting others,

00:49:47 we care about ourselves more. Of course. And so you create more frustration. And a robot AI

00:49:54 assistant can really optimize the experience for you. I was just speaking to the point,

00:50:01 you actually bring up a very, very good point. But I was speaking to the fact that

00:50:05 us humans are a little complicated, that we don’t necessarily want a perfect servant.

00:50:11 I don’t, maybe you disagree with that, but there’s a, I think there’s a push and pull with humans.

00:50:20 You’re right.

00:50:21 A little tension, a little mystery that, of course, that’s really difficult for AI to get right. But

00:50:27 I do sense, especially today with social media, that people are getting more and more lonely,

00:50:34 even young folks, and sometimes especially young folks, that loneliness, there’s a longing for

00:50:42 connection and AI can help alleviate some of that loneliness. Some, just somebody who listens,

00:50:50 like in person. So to speak. So to speak, yeah. So to speak. Yeah, that to me is really exciting.

00:51:03 That is really exciting. But so if we look at that, that level of intelligence, which is

00:51:08 exceptionally difficult to achieve actually, as the singularity or whatever, that’s the human level

00:51:15 bar, that people have dreamt of that too. Turing dreamt of it. He had a date, a timeline.

00:51:23 How has your own timeline evolved?

00:51:27 I don’t even think about it.

00:51:28 You don’t even think?

00:51:29 No. Just this field has been so full of surprises for me.

00:51:38 You’re just taking it in and seeing the fun of the basic science.

00:51:42 Yeah. I just can’t. Maybe that’s because I’ve been around the field long enough to think,

00:51:48 you know, don’t go that way. Herb Simon was terrible about making these predictions of

00:51:54 when this and that would happen. And he was a sensible guy.

00:52:00 His quotes are often used, right?

00:52:03 As a legend, yeah.

00:52:04 Yeah. Do you have concerns about AI, the existential threats that many people

00:52:14 like Elon Musk and Sam Harris and others are thinking about?

00:52:18 Yeah. That takes up half a chapter in my book. I call it the male gaze.

00:52:29 Well, you hear me out. The male gaze is actually a term from film criticism.

00:52:36 And I’m blocking on the woman who dreamed this up. But she pointed out how most movies were

00:52:44 made from the male point of view, that women were objects, not subjects. They didn’t have any

00:52:53 agency and so on and so forth. So when Elon and his pals, Hawking and so on, came out with,

00:53:01 AI is going to eat our lunch and our dinner and our midnight snack too, I thought, what?

00:53:08 And I said to Ed Feigenbaum, oh, I see what this is. These guys have always been

00:53:13 the smartest guys on the block. And here comes something that might be smarter. Oh, let’s stamp

00:53:18 it out before it takes over. And Ed laughed. He said, I didn’t think about it that way.

00:53:24 But I did. I did. And it is the male gaze. Okay, suppose these things do have agency.

00:53:34 Well, let’s wait and see what happens. Can we imbue them with ethics? Can we imbue them

00:53:43 with a sense of empathy? Or are they just going to be, I don’t know, we’ve had centuries of guys

00:53:54 like that. That’s interesting that the ego, the male gaze is immediately threatened. And so you

00:54:05 can’t think in a patient, calm way of how the tech could evolve. Speaking of which, your ’96 book,

00:54:16 The Future of Women, I think at the time, and certainly now, maybe I’m just more

00:54:23 cognizant of it now, is extremely relevant. You and Nancy Ramsey talk about four

00:54:30 possible futures of women in science and tech. So if we look at the decades before and after

00:54:38 the book was released, can you tell a history, sorry, of women in science and tech and how it

00:54:46 has evolved? How have things changed? Where do we stand? Not enough. They have not changed enough.

00:54:54 The way that women are ground down in computing is simply unbelievable. But what are the four

00:55:05 possible futures for women in tech from the book? What you’re really looking at are various aspects

00:55:13 of the present. So for each of those, you could say, oh yeah, we do have backlash. Look at what’s

00:55:20 happening with abortion and so on and so forth. We have one step forward, one step back.

00:55:28 The golden age of equality was the hardest chapter to write. And I used something from

00:55:33 the Santa Fe Institute, which is the sandpile effect, that you drop sand very slowly onto a pile

00:55:41 and it grows and it grows and it grows until suddenly it just breaks apart. And

00:55:50 in a way, Me Too has done that. That was the last drop of sand that broke everything apart.
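[Editor’s aside, not part of the conversation: the sandpile she describes is the Bak-Tang-Wiesenfeld model from the Santa Fe-adjacent literature on self-organized criticality. Below is a minimal sketch of that model; the grid size, seed, and number of grains are arbitrary choices. Grains are dropped one at a time, and any cell reaching four grains topples one grain to each neighbor, so long runs of tiny avalanches are punctuated by occasional huge ones, exactly the "last drop of sand" dynamic.]

```python
import random

random.seed(1)
N = 20
grid = [[0] * N for _ in range(N)]  # grains per cell, always < 4 between drops

def topple():
    """Relax the grid: any cell holding 4+ grains sheds one grain to each
    neighbour (grains fall off the edges). Returns the avalanche size,
    i.e. how many toppling events this one grain triggered."""
    size = 0
    unstable = [(r, c) for r in range(N) for c in range(N) if grid[r][c] >= 4]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < 4:
            continue  # may have been queued twice
        grid[r][c] -= 4
        size += 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < N and 0 <= nc < N:
                grid[nr][nc] += 1
                if grid[nr][nc] >= 4:
                    unstable.append((nr, nc))
    return size

avalanches = []
for _ in range(20000):
    r, c = random.randrange(N), random.randrange(N)
    grid[r][c] += 1          # drop sand very slowly, one grain at a time
    avalanches.append(topple())

largest = max(avalanches)    # the "Me Too" grain: one drop, huge cascade
```

Most drops cause no avalanche at all, which is what makes the rare system-spanning one feel so sudden.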

00:55:58 That was a perfect example of the sandpile effect. And that made me feel good. It didn’t

00:56:03 change all of society, but it really woke a lot of people up. But are you in general optimistic

00:56:10 about maybe after Me Too? I mean, Me Too is about a very specific kind of thing.

00:56:17 Boy, solve that and you solve everything.

00:56:19 But are you in general optimistic about the future?

00:56:23 Yes. I’m a congenital optimist. I can’t help it.

00:56:28 What about AI? What are your thoughts about the future of AI?

00:56:34 Of course, I get asked, what do you worry about? And the one thing I worry about is the things

00:56:40 we can’t anticipate. There’s going to be something out of left field that we will just say,

00:56:47 we weren’t prepared for that. I am generally optimistic. When I first took up

00:56:58 being interested in AI, like most people in the field, more intelligence was like more virtue.

00:57:05 You know, what could be bad? And in a way, I still believe that. But I realize that my

00:57:13 notion of intelligence has broadened. There are many kinds of intelligence,

00:57:19 and we need to imbue our machines with those many kinds.

00:57:24 So you’ve now just finished, or are in the process of finishing, the book that you’ve been working

00:57:32 on, the memoir. How have you changed? I know it’s just writing, but how have you changed in

00:57:39 the process? If you look back, what kind of stuff did it bring up that surprised you,

00:57:47 looking at the entirety of it all? The biggest thing, and it really wasn’t a surprise,

00:57:55 is how lucky I was. Oh, my. To have access to the beginning of a scientific field that is going to

00:58:07 change the world. How did I luck out? And yes, of course, my view of things has widened a lot.

00:58:20 If I can get back to one feminist part of our conversation. Without knowing it,

00:58:28 it really was subconscious. I wanted AI to succeed because I was so tired of hearing

00:58:36 that intelligence was inside the male cranium. And I thought if there was something out there

00:58:43 that wasn’t a male thinking and doing well, then that would put the lie to this whole notion of

00:58:53 intelligence resides in the male cranium. I did not know that until one night Harold Cohen and I

00:59:01 were having a glass of wine, maybe two, and he said, what drew you to AI? And I said, oh,

00:59:09 you know, smartest people I knew, great project, blah, blah, blah. And I said, and I wanted

00:59:14 something besides male smarts. And it just bubbled up out of me like, what?

00:59:24 It’s kind of brilliant, actually. So AI really humbles all of us and humbles the people that

00:59:32 need to be humbled the most. Let’s hope.

00:59:35 Wow. That is so beautiful. Pamela, thank you so much for talking to me. It’s really a huge honor.

00:59:40 It’s been a great pleasure.

00:59:41 Thank you.