Ben Goertzel: Artificial General Intelligence #103

Transcript

00:00:00 The following is a conversation with Ben Goertzel,

00:00:03 one of the most interesting minds

00:00:04 in the artificial intelligence community.

00:00:06 He’s the founder of SingularityNet,

00:00:08 designer of OpenCog AI Framework,

00:00:11 formerly a director of research

00:00:13 at the Machine Intelligence Research Institute,

00:00:15 and chief scientist of Hanson Robotics,

00:00:18 the company that created the Sophia robot.

00:00:21 He has been a central figure in the AGI community

00:00:23 for many years, including in his organizing

00:00:26 and contributing to the conference

00:00:28 on artificial general intelligence,

00:00:30 the 2020 version of which is actually happening this week,

00:00:34 Wednesday, Thursday, and Friday.

00:00:36 It’s virtual and free.

00:00:38 I encourage you to check out the talks,

00:00:40 including by Joscha Bach from episode 101 of this podcast.

00:00:45 Quick summary of the ads.

00:00:46 Two sponsors, The Jordan Harbinger Show and Masterclass.

00:00:51 Please consider supporting this podcast

00:00:52 by going to jordanharbinger.com slash lex

00:00:56 and signing up at masterclass.com slash lex.

00:01:00 Click the links, buy all the stuff.

00:01:02 It’s the best way to support this podcast

00:01:04 and the journey I’m on in my research and startup.

00:01:08 This is the Artificial Intelligence Podcast.

00:01:11 If you enjoy it, subscribe on YouTube,

00:01:13 review it with five stars on Apple Podcast,

00:01:15 support it on Patreon, or connect with me on Twitter

00:01:18 at lexfriedman, spelled without the E, just F R I D M A N.

00:01:23 As usual, I’ll do a few minutes of ads now

00:01:25 and never any ads in the middle

00:01:27 that can break the flow of the conversation.

00:01:29 This episode is supported by The Jordan Harbinger Show.

00:01:33 Go to jordanharbinger.com slash lex.

00:01:35 It’s how he knows I sent you.

00:01:37 On that page, there’s links to subscribe to it

00:01:40 on Apple Podcast, Spotify, and everywhere else.

00:01:43 I’ve been binging on his podcast.

00:01:45 Jordan is great.

00:01:46 He gets the best out of his guests,

00:01:47 dives deep, calls them out when it’s needed,

00:01:50 and makes the whole thing fun to listen to.

00:01:52 He’s interviewed Kobe Bryant, Mark Cuban,

00:01:55 Neil deGrasse Tyson, Garry Kasparov, and many more.

00:01:59 His conversation with Kobe is a reminder

00:02:01 how much focus and hard work is required for greatness

00:02:06 in sport, business, and life.

00:02:09 I highly recommend the episode if you want to be inspired.

00:02:12 Again, go to jordanharbinger.com slash lex.

00:02:15 It’s how Jordan knows I sent you.

00:02:18 This show is sponsored by Master Class.

00:02:21 Sign up at masterclass.com slash lex

00:02:24 to get a discount and to support this podcast.

00:02:27 When I first heard about Master Class,

00:02:29 I thought it was too good to be true.

00:02:31 For 180 bucks a year, you get an all access pass

00:02:34 to watch courses from, to list some of my favorites:

00:02:37 Chris Hadfield on Space Exploration,

00:02:39 Neil deGrasse Tyson on Scientific Thinking

00:02:41 and Communication, Will Wright, creator of

00:02:44 the greatest city building game ever, Sim City,

00:02:47 and Sims, on Game Design,

00:02:50 Carlos Santana on Guitar,

00:02:54 Garry Kasparov, the greatest chess player ever, on chess,

00:02:59 Daniel Negreanu on Poker, and many more.

00:03:01 Chris Hadfield explaining how rockets work

00:03:04 and the experience of being launched into space alone

00:03:07 is worth the money.

00:03:08 Once again, sign up at masterclass.com slash lex

00:03:12 to get a discount and to support this podcast.

00:03:15 Now, here’s my conversation with Ben Goertzel.

00:03:20 What books, authors, ideas had a lot of impact on you

00:03:25 in your life in the early days?

00:03:27 You know, what got me into AI and science fiction

00:03:32 and such in the first place wasn’t a book,

00:03:34 but the original Star Trek TV show,

00:03:37 which my dad watched with me like in its first run.

00:03:39 It would have been 1968, 69 or something,

00:03:42 and that was incredible because every show

00:03:45 they visited a different alien civilization

00:03:49 with different culture and weird mechanisms.

00:03:51 But that got me into science fiction,

00:03:55 and there wasn’t that much science fiction

00:03:57 to watch on TV at that stage,

00:03:58 so that got me into reading the whole literature

00:04:01 of science fiction, you know,

00:04:03 from the beginning of the previous century until that time.

00:04:07 And I mean, there was so many science fiction writers

00:04:10 who were inspirational to me.

00:04:12 I’d say if I had to pick two,

00:04:14 it would have been Stanisław Lem, the Polish writer.

00:04:18 Yeah, Solaris, and then he had a bunch

00:04:22 of more obscure writings on superhuman AIs

00:04:25 that were engineered.

00:04:26 Solaris was sort of a superhuman,

00:04:28 naturally occurring intelligence.

00:04:31 Then Philip K. Dick, who, you know,

00:04:34 ultimately my fandom for Philip K. Dick

00:04:37 is one of the things that brought me together

00:04:39 with David Hansen, my collaborator on robotics projects.

00:04:43 So, you know, Stanisław Lem was very much an intellectual,

00:04:47 right, so he had a very broad view of intelligence

00:04:51 going beyond the human and into what I would call,

00:04:54 you know, open ended superintelligence.

00:04:56 The Solaris superintelligent ocean was intelligent,

00:05:01 in some ways more generally intelligent than people,

00:05:04 but in a complex and confusing way

00:05:07 so that human beings could never quite connect to it,

00:05:10 but it was still probably very, very smart.

00:05:13 And then the Golem XIV supercomputer

00:05:16 in one of Lem’s books, this was engineered by people,

00:05:20 but eventually it became very intelligent

00:05:24 in a different direction than humans

00:05:26 and decided that humans were kind of trivial,

00:05:29 not that interesting.

00:05:30 So it put some impenetrable shield around itself,

00:05:35 shut itself off from humanity,

00:05:36 and then issued some philosophical screed

00:05:40 about the pathetic and hopeless nature of humanity

00:05:44 and all human thought, and then disappeared.

00:05:48 Now, Philip K. Dick, he was a bit different.

00:05:51 He was human focused, right?

00:05:52 His main thing was, you know, human compassion

00:05:55 and the human heart and soul are going to be the constant

00:05:59 that will keep us going through whatever aliens we discover

00:06:03 or telepathy machines or super AIs or whatever it might be.

00:06:08 So he didn’t believe in reality,

00:06:10 like the reality that we see may be a simulation

00:06:13 or a dream or something else we can’t even comprehend,

00:06:17 but he believed in love and compassion

00:06:19 as something persistent

00:06:20 through the various simulated realities.

00:06:22 So those two science fiction writers had a huge impact on me.

00:06:26 Then a little older than that, I got into Dostoevsky

00:06:30 and Friedrich Nietzsche and Rimbaud

00:06:33 and a bunch of more literary type writing.

00:06:36 Can we talk about some of those things?

00:06:38 So on the Solaris side, Stanislaw Lem,

00:06:43 this kind of idea of there being intelligences out there

00:06:47 that are different than our own,

00:06:49 do you think there are intelligences maybe all around us

00:06:53 that we’re not able to even detect?

00:06:56 So this kind of idea of,

00:06:58 maybe you can comment also on Stephen Wolfram

00:07:01 thinking that there’s computations all around us

00:07:04 and we’re just not smart enough to kind of detect

00:07:07 their intelligence or appreciate their intelligence.

00:07:10 Yeah, so my friend Hugo de Garis,

00:07:13 who I’ve been talking to about these things

00:07:15 for many decades, since the early 90s,

00:07:19 he had an idea he called SIPI,

00:07:21 the Search for Intraparticulate Intelligence.

00:07:25 So the concept there was as AIs get smarter

00:07:28 and smarter and smarter,

00:07:30 assuming the laws of physics as we know them now

00:07:33 are still what these super intelligences

00:07:37 perceived to hold and are bound by,

00:07:39 as they get smarter and smarter,

00:07:40 they’re gonna shrink themselves littler and littler

00:07:42 because special relativity limits

00:07:45 how fast they can communicate

00:07:47 between two spatially distant points.
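[Note: a rough back-of-the-envelope version of this light-speed argument, with approximate numbers. The one-way signal delay across a system of size d is at least

t_{\min} = d / c, \quad c \approx 3 \times 10^{8}\ \text{m/s},

so d = 1 m gives t_{\min} \approx 3.3 ns, while d = 1 nm gives t_{\min} \approx 3.3 \times 10^{-18} s. Shrinking a mind by a factor of a billion cuts its internal communication latency by the same factor.]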

00:07:49 So they’re gonna get smaller and smaller,

00:07:50 but then ultimately, what does that mean?

00:07:53 The minds of the super, super, super intelligences,

00:07:56 they’re gonna be packed into the interaction

00:07:59 of elementary particles or quarks

00:08:01 or the partons inside quarks or whatever it is.

00:08:04 So what we perceive as random fluctuations

00:08:07 on the quantum or sub quantum level

00:08:09 may actually be the thoughts

00:08:11 of the micro, micro, micro miniaturized super intelligences

00:08:16 because there’s no way we can tell random

00:08:19 from structured when the algorithmic information is

00:08:21 more complex than our brains can handle, right?

00:08:23 We can’t tell the difference.
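[Note: one way to state this algorithmic-information point a bit more precisely, loosely following standard Kolmogorov-complexity results. The complexity of a string x is

K(x) = \min\{\, \ell(p) : U(p) = x \,\},

the length of the shortest program p that prints x on a universal machine U. K is uncomputable, and by Chaitin's incompleteness theorem any fixed formal system can certify "K(x) > c" only for c up to some constant depending on that system. So data generated by a process whose description length exceeds what an observer can model is, to that observer, indistinguishable from randomness.]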

00:08:24 So what we think is random could be the thought processes

00:08:27 of some really tiny super minds.

00:08:29 And if so, there is not a damn thing we can do about it,

00:08:34 except try to upgrade our intelligences

00:08:37 and expand our minds so that we can perceive

00:08:40 more of what’s around us.

00:08:41 But if those random fluctuations,

00:08:43 like even if we go to like quantum mechanics,

00:08:46 if that’s actually super intelligent systems,

00:08:51 aren’t we then part of that super intelligence?

00:08:54 Aren’t we just like a finger of the entirety

00:08:58 of the body of the super intelligent system?

00:09:01 It could be, I mean, a finger is a strange metaphor.

00:09:05 I mean, we…

00:09:08 A finger is dumb is what I mean.

00:09:10 But the finger is also useful

00:09:12 and is controlled with intent by the brain

00:09:14 whereas we may be much less than that, right?

00:09:16 I mean, yeah, we may be just some random epiphenomenon

00:09:21 that they don’t care about too much.

00:09:23 Like think about the shape of the crowd emanating

00:09:26 from a sports stadium or something, right?

00:09:28 There’s some emergent shape to the crowd, it’s there.

00:09:31 You could take a picture of it, it’s kind of cool.

00:09:33 It’s irrelevant to the main point of the sports event

00:09:36 or where the people are going

00:09:37 or what’s on the minds of the people

00:09:40 making that shape in the crowd, right?

00:09:41 So we may just be some semi arbitrary higher level pattern

00:09:47 popping out of a lower level

00:09:49 hyper intelligent self organization.

00:09:52 And I mean, so be it, right?

00:09:55 I mean, that’s one thing that…

00:09:57 Yeah, I mean, the older I’ve gotten,

00:09:59 the more respect I’ve achieved for our fundamental ignorance.

00:10:04 I mean, mine and everybody else’s.

00:10:06 I mean, I look at my two dogs,

00:10:08 two beautiful little toy poodles

00:10:10 and they watch me sitting at the computer typing.

00:10:14 They just think I’m sitting there wiggling my fingers

00:10:16 to exercise them maybe or guarding the monitor on the desk.

00:10:19 They have no idea that I’m communicating

00:10:22 with other people halfway around the world,

00:10:24 let alone creating complex algorithms

00:10:27 running in RAM on some computer server

00:10:30 in St. Petersburg or something, right?

00:10:32 Although they’re right there in the room with me.

00:10:35 So what things are there right around us

00:10:37 that we’re just too stupid or close minded to comprehend?

00:10:40 Probably quite a lot.

00:10:42 Your very poodle could also be communicating

00:10:46 across multiple dimensions with other beings

00:10:49 and you’re too unintelligent to understand

00:10:53 the kind of communication mechanism they’re going through.

00:10:55 There have been various TV shows and science fiction novels,

00:10:59 positing cats, dolphins, mice and whatnot

00:11:03 are actually super intelligences here to observe that.

00:11:07 I would guess, as one or another of the quantum physics founders

00:11:12 said, those theories are not crazy enough to be true.

00:11:15 The reality is probably crazier than that.

00:11:17 Beautifully put.

00:11:18 So on the human side, with Philip K. Dick

00:11:22 and in general, where do you fall on this idea

00:11:27 that love and just the basic spirit of human nature

00:11:30 persists throughout these multiple realities?

00:11:34 Are you on the side, like the thing that inspires you

00:11:38 about artificial intelligence,

00:11:40 is it the human side of somehow persisting

00:11:46 through all of the different systems we engineer

00:11:49 or is AI inspire you to create something

00:11:53 that’s greater than human, that’s beyond human,

00:11:55 that’s almost nonhuman?

00:11:59 I would say my motivation to create AGI

00:12:02 comes from both of those directions actually.

00:12:05 So when I first became passionate about AGI

00:12:08 when I was, it would have been two or three years old

00:12:11 after watching robots on Star Trek.

00:12:14 I mean, then it was really a combination

00:12:18 of intellectual curiosity, like can a machine really think,

00:12:21 how would you do that?

00:12:22 And yeah, just ambition to create something much better

00:12:27 than all the clearly limited

00:12:28 and fundamentally defective humans I saw around me.

00:12:31 Then as I got older and got more enmeshed

00:12:35 in the human world and got married, had children,

00:12:38 saw my parents begin to age, I started to realize,

00:12:41 well, not only will AGI let you go far beyond

00:12:45 the limitations of the human,

00:12:46 but it could also stop us from dying and suffering

00:12:50 and feeling pain and tormenting ourselves mentally.

00:12:54 So you can see AGI has amazing capability

00:12:58 to do good for humans, as humans,

00:13:01 alongside with its capability

00:13:03 to go far, far beyond the human level.

00:13:06 So I mean, both aspects are there,

00:13:09 which makes it even more exciting and important.

00:13:13 So you mentioned Dostoevsky and Nietzsche.

00:13:15 Where did you pick up from those guys?

00:13:17 I mean.

00:13:18 That would probably go beyond the scope

00:13:21 of a brief interview, certainly.

00:13:24 I mean, both of those are amazing thinkers

00:14:26 whom one will necessarily have

00:13:29 a complex relationship with, right?

00:13:32 So, I mean, Dostoevsky on the minus side,

00:13:36 he’s kind of a religious fanatic

00:13:38 and he sort of helped squash the Russian nihilist movement,

00:13:42 which was very interesting.

00:13:43 Because what nihilism meant originally

00:13:45 in that period of the mid, late 1800s in Russia

00:13:48 was not taking anything fully 100% for granted.

00:13:52 It was really more like what we’d call Bayesianism now,

00:13:54 where you don’t wanna adopt anything

00:13:56 as a dogmatic certitude and always leave your mind open.

00:14:01 And how Dostoevsky parodied nihilism

00:14:04 was a bit different, right?

00:14:06 He parodied it as people who believe absolutely nothing.

00:14:10 So they must assign an equal probability weight

00:14:13 to every proposition, which doesn’t really work.

00:14:17 So on the one hand, I didn’t really agree with Dostoevsky

00:14:22 on his sort of religious point of view.

00:14:26 On the other hand, if you look at his understanding

00:14:29 of human nature and sort of the human mind

00:14:32 and heart and soul, it’s really unparalleled.

00:14:37 He had an amazing view of how human beings construct a world

00:14:42 for themselves based on their own understanding

00:14:45 and their own mental predisposition.

00:14:47 And I think if you look in The Brothers Karamazov

00:14:50 in particular, the Russian literary theorist Mikhail Bakhtin

00:14:56 wrote about this as a polyphonic mode of fiction,

00:14:59 which means it’s not third person,

00:15:02 but it’s not first person from any one person really.

00:15:05 There are many different characters in the novel

00:15:07 and each of them is sort of telling part of the story

00:15:10 from their own point of view.

00:15:11 So the reality of the whole story is an intersection

00:15:15 like synergetically of the many different characters’

00:15:19 world views.

00:15:19 And that really, it’s a beautiful metaphor

00:15:23 and even a reflection I think of how all of us

00:15:26 socially create our reality.

00:15:27 Like each of us sees the world in a certain way.

00:15:31 Each of us in a sense is making the world as we see it

00:15:34 based on our own minds and understanding,

00:15:37 but it’s polyphony like in music

00:15:40 where multiple instruments are coming together

00:15:43 to create the sound.

00:15:44 The ultimate reality that’s created

00:15:46 comes out of each of our subjective understandings,

00:15:50 intersecting with each other.

00:15:51 And that was one of the many beautiful things in Dostoevsky.

00:15:55 So maybe a little bit to mention,

00:15:57 you have a connection to Russia and the Soviet culture.

00:16:02 I mean, I’m not sure exactly what the nature

00:16:03 of the connection is, but at least the spirit

00:16:06 of your thinking is in there.

00:16:07 Well, my ancestry is three quarters Eastern European Jewish.

00:16:12 So I mean, three of my great grandparents

00:16:16 emigrated to New York from Lithuania

00:16:20 and sort of border regions of Poland,

00:16:23 which were in and out of Poland

00:16:24 around the time of World War I.

00:16:28 And they were socialists and communists as well as Jews,

00:16:33 mostly Menshevik, not Bolshevik.

00:16:35 And they sort of, they fled at just the right time

00:16:39 to the US for their own personal reasons.

00:16:41 And then almost all, or maybe all of my extended family

00:16:45 that remained in Eastern Europe was killed

00:16:47 either by Hitler’s or Stalin’s minions at some point.

00:16:50 So the branch of the family that emigrated to the US

00:16:53 was pretty much the only one.

00:16:56 So how much of the spirit of the people

00:16:58 is in your blood still?

00:16:59 Like, when you look in the mirror, do you see,

00:17:03 what do you see?

00:17:04 Meat, I see a bag of meat that I want to transcend

00:17:08 by uploading into some sort of superior reality.

00:17:12 But very, I mean, yeah, very clearly,

00:17:18 I mean, I’m not religious in a traditional sense,

00:17:22 but clearly the Eastern European Jewish tradition

00:17:27 was what I was raised in.

00:17:28 I mean, there was, my grandfather, Leo Zwell,

00:17:32 was a physical chemist who worked with Linus Pauling

00:17:35 and a bunch of the other early greats in quantum mechanics.

00:17:38 I mean, he was into X ray diffraction.

00:17:41 He was on the material science side,

00:17:42 an experimentalist rather than a theorist.

00:17:45 His sister was also a physicist.

00:17:47 And my father’s father, Victor Goertzel,

00:17:51 was a PhD in psychology who had the unenviable job

00:17:57 of giving psychotherapy to the Japanese

00:17:59 in internment camps in the US in World War II,

00:18:03 like to counsel them why they shouldn’t kill themselves,

00:18:05 even though they’d had all their stuff taken away

00:18:08 and been imprisoned for no good reason.

00:18:10 So, I mean, yeah, there’s a lot of Eastern European

00:18:15 Jewishness in my background.

00:18:18 One of my great uncles was, I guess,

00:18:20 conductor of the San Francisco Orchestra.

00:18:22 So, Mickey Salkind, there’s a

00:18:25 bunch of music in there also.

00:18:27 And clearly this culture was all about learning

00:18:31 and understanding the world,

00:18:34 and also not quite taking yourself too seriously

00:18:38 while you do it, right?

00:18:39 There’s a lot of Yiddish humor in there.

00:18:42 So I do appreciate that culture,

00:18:45 although the whole idea that like the Jews

00:18:47 are the chosen people of God

00:18:49 never resonated with me too much.

00:18:51 The graph of the Goertzel family,

00:18:55 I mean, just the people I’ve encountered

00:18:56 just doing some research and just knowing your work

00:18:59 through the decades, it’s kind of fascinating.

00:19:03 Just the number of PhDs.

00:19:06 Yeah, yeah, I mean, my dad is a sociology professor

00:19:10 who recently retired from Rutgers University,

00:19:15 but clearly that gave me a head start in life.

00:19:18 I mean, my grandfather gave me

00:19:20 all those quantum mechanics books

00:19:21 when I was like seven or eight years old.

00:19:24 I remember going through them,

00:19:26 and it was all the old quantum mechanics

00:19:28 like Rutherford atoms and stuff.

00:19:30 So I got to the part of wave functions,

00:19:32 which I didn’t understand, although I was very bright kid.

00:19:36 And I realized he didn’t quite understand it either,

00:19:38 but at least like he pointed me to some professor

00:19:41 he knew at UPenn nearby who understood these things, right?

00:19:45 So that’s an unusual opportunity for a kid to have, right?

00:19:49 My dad, he was programming Fortran

00:19:52 when I was 10 or 11 years old

00:19:53 on like HP 3000 mainframes at Rutgers University.

00:19:57 So I got to do linear regression in Fortran

00:20:00 on punch cards when I was in middle school, right?

00:20:04 Because he was doing, I guess, analysis of demographic

00:20:07 and sociology data.

00:20:09 So yes, certainly that gave me a head start

00:20:14 and a push towards science beyond what would have been

00:20:17 the case with many, many different situations.

00:20:19 When did you first fall in love with AI?

00:20:22 Is it the programming side of Fortran?

00:20:24 Is it maybe the sociology psychology

00:20:27 that you picked up from your dad?

00:20:28 Or is it the quantum mechanics?

00:20:29 I fell in love with AI when I was probably three years old

00:20:30 when I saw a robot on Star Trek.

00:20:32 It was turning around in a circle going,

00:20:34 error, error, error, error,

00:20:36 because Spock and Kirk had tricked it

00:20:39 into a mechanical breakdown by presenting it

00:20:41 with a logical paradox.

00:20:42 And I was just like, well, this makes no sense.

00:20:45 This AI is very, very smart.

00:20:47 It’s been traveling all around the universe,

00:20:49 but these people could trick it

00:20:50 with a simple logical paradox.

00:20:52 Like why, if the human brain can get beyond that paradox,

00:20:57 why can’t this AI?

00:20:59 So I felt the screenwriters of Star Trek

00:21:03 had misunderstood the nature of intelligence.

00:21:06 And I complained to my dad about it,

00:21:07 and he wasn’t gonna say anything one way or the other.

00:21:12 But before I was born, when my dad was at Antioch College

00:21:18 in the middle of the US,

00:21:20 he led a protest movement called SLAM,

00:21:25 Student League Against Mortality.

00:21:27 They were protesting against death,

00:21:28 wandering across the campus.

00:21:31 So he was into some futuristic things even back then,

00:21:35 but whether AI could confront logical paradoxes or not,

00:21:40 he didn’t know.

00:21:41 But when I, 10 years after that or something,

00:21:44 I discovered Douglas Hofstadter’s book,

00:21:46 Gödel, Escher, Bach, and that was sort of to the same point of AI

00:21:51 and paradox and logic, right?

00:21:52 Because he was over and over

00:21:54 with Gödel’s incompleteness theorem,

00:21:56 and can an AI really fully model itself reflexively

00:22:00 or does that lead you into some paradox?

00:22:02 Can the human mind truly model itself reflexively

00:22:05 or does that lead you into some paradox?

00:22:07 So I think that book, Gödel, Escher, Bach,

00:22:10 which I think I read when it first came out,

00:22:13 I would have been 12 years old or something.

00:22:14 I remember it was like a 16 hour day.

00:22:17 I read it cover to cover and then reread it.

00:22:19 I reread it after that,

00:22:21 because there was a lot of weird things

00:22:22 with little formal systems in there

00:22:24 that were hard for me at the time.

00:22:25 But that was the first book I read

00:22:27 that gave me a feeling for AI as like a practical academic

00:22:34 or engineering discipline that people were working in.

00:22:37 Because before I read Gödel, Escher, Bach,

00:22:40 I was into AI from the point of view of a science fiction fan.

00:22:43 And I had the idea, well, it may be a long time

00:22:47 before we can achieve immortality and superhuman AGI.

00:22:50 So I should figure out how to build a spacecraft

00:22:54 traveling close to the speed of light, go far away,

00:22:57 then come back to the earth in a million years

00:22:58 when technology is more advanced

00:23:00 and we can build these things.

00:23:01 Reading Gödel, Escher, Bach,

00:23:03 while it didn’t all ring true to me, a lot of it did,

00:23:06 but I could see like there are smart people right now

00:23:09 at various universities around me

00:23:11 who are actually trying to work on building

00:23:15 what I would now call AGI,

00:23:16 although Hofstadter didn’t call it that.

00:23:19 So really it was when I read that book,

00:23:21 which would have been probably middle school,

00:23:23 that then I started to think,

00:23:24 well, this is something that I could practically work on.

00:23:29 Yeah, as opposed to flying away and waiting it out,

00:23:31 you can actually be one of the people

00:23:33 that actually builds the system.

00:23:34 Yeah, exactly.

00:23:35 And if you think about, I mean,

00:23:36 I was interested in what we’d now call nanotechnology

00:23:40 and in the human immortality and time travel,

00:23:44 all the same cool things as every other,

00:23:46 like science fiction loving kid.

00:23:49 But AI seemed like if Hofstadter was right,

00:23:52 you just figure out the right program,

00:23:54 sit there and type it.

00:23:55 Like you don’t need to spin stars into weird configurations

00:23:59 or get government approval to cut people up

00:24:02 and fiddle with their DNA or something, right?

00:24:05 It’s just programming.

00:24:06 And then of course that can achieve anything else.

00:24:10 There’s another book from back then,

00:24:12 which was by Gerald Feinberg,

00:24:17 who was a physicist at Princeton.

00:24:21 And that was the Prometheus Project.

00:24:24 And this book was written in the late 1960s,

00:24:26 though I encountered it in the mid 70s.

00:24:28 But what this book said is in the next few decades,

00:24:30 humanity is gonna create superhuman thinking machines,

00:24:34 molecular nanotechnology and human immortality.

00:24:37 And then the challenge we’ll have is what to do with it.

00:24:41 Do we use it to expand human consciousness

00:24:43 in a positive direction?

00:24:44 Or do we use it just to further vapid consumerism?

00:24:49 And what he proposed was that the UN

00:24:51 should do a survey on this.

00:24:53 And the UN should send people out to every little village

00:24:56 in remotest Africa or South America

00:24:58 and explain to everyone what technology

00:25:01 was gonna bring the next few decades

00:25:03 and the choice that we had about how to use it.

00:25:05 And let everyone on the whole planet vote

00:25:07 about whether we should develop super AI nanotechnology

00:25:11 and immortality for expanded consciousness

00:25:15 or for rampant consumerism.

00:25:18 And needless to say, that didn’t quite happen.

00:25:22 And I think this guy died in the mid 80s,

00:25:24 so he didn’t even see his ideas start

00:25:25 to become more mainstream.

00:25:28 But it’s interesting, many of the themes I’m engaged with now

00:25:31 from AGI and immortality,

00:25:33 even to trying to democratize technology

00:25:36 as I’ve been pushing forward with Singularity,

00:25:38 my work in the blockchain world,

00:25:40 many of these themes were there in Feinberg’s book

00:25:43 in the late 60s even.

00:25:47 And of course, Valentin Turchin, a Russian writer

00:25:52 and a great Russian physicist who I got to know

00:25:55 when we both lived in New York in the late 90s

00:25:59 and early aughts.

00:25:59 I mean, he had a book in the late 60s in Russia,

00:26:03 which was The Phenomenon of Science,

00:26:05 which laid out all these same things as well.

00:26:10 And Val died in, I don’t remember,

00:26:12 2004 or five or something, of Parkinsonism.

00:26:15 So yeah, it’s easy for people to lose track now

00:26:20 of the fact that the futurist and Singularitarian

00:26:25 advanced technology ideas that are now almost mainstream

00:26:29 are on TV all the time.

00:26:30 I mean, these are not that new, right?

00:26:34 They’re sort of new in the history of the human species,

00:26:37 but I mean, these were all around in fairly mature form

00:26:41 in the middle of the last century,

00:26:43 were written about quite articulately

00:26:45 by fairly mainstream people

00:26:47 who were professors at top universities.

00:26:50 It’s just until the enabling technologies

00:26:52 got to a certain point, you couldn’t make it real.

00:26:57 And even in the 70s, I was sort of seeing that

00:27:02 and living through it, right?

00:27:04 From Star Trek to Douglas Hofstadter,

00:27:07 things were getting very, very practical

00:27:09 from the late 60s to the late 70s.

00:27:11 And the first computer I bought,

00:27:15 you could only program with hexadecimal machine code

00:27:17 and you had to solder it together.

00:27:19 And then like a few years later, there’s punch cards.

00:27:23 And a few years later, you could get like Atari 400

00:27:27 and Commodore VIC 20, and you could type on the keyboard

00:27:30 and program in higher level languages

00:27:32 alongside the assembly language.

00:27:34 So these ideas have been building up a while.

00:27:38 And I guess my generation got to feel them build up,

00:27:42 which is different than people coming into the field now

00:27:46 for whom these things have just been part of the ambience

00:27:50 of culture for their whole career

00:27:52 or even their whole life.

00:27:54 Well, it’s fascinating to think about there being all

00:27:57 of these ideas kind of swimming, almost with the noise

00:28:01 all around the world, all the different generations,

00:28:04 and then some kind of nonlinear thing happens

00:28:07 where they percolate up

00:28:09 and capture the imagination of the mainstream.

00:28:12 And that seems to be what’s happening with AI now.

00:28:14 I mean, Nietzsche, who you mentioned had the idea

00:28:16 of the Superman, right?

00:28:18 But he didn’t understand enough about technology

00:28:21 to think you could physically engineer a Superman

00:28:24 by piecing together molecules in a certain way.

00:28:28 He was a bit vague about how the Superman would appear,

00:28:33 but he was quite deep at thinking

00:28:35 about what the state of consciousness

00:28:37 and the mode of cognition of a Superman would be.

00:28:42 He was a very astute analyst of how the human mind

00:28:47 constructs the illusion of a self,

00:28:49 how it constructs the illusion of free will,

00:28:52 how it constructs values like good and evil

00:28:56 out of its own desire to maintain

00:28:59 and advance its own organism.

00:29:01 He understood a lot about how human minds work.

00:29:04 Then he understood a lot

00:29:05 about how post human minds would work.

00:29:07 I mean, the Superman was supposed to be a mind

00:29:10 that would basically have complete root access

00:29:13 to its own brain and consciousness

00:29:16 and be able to architect its own value system

00:29:19 and inspect and fine tune all of its own biases.

00:29:24 So that’s a lot of powerful thinking there,

00:29:27 which then fed in and sort of seeded

00:29:29 all of postmodern continental philosophy

00:29:32 and all sorts of things have been very valuable

00:29:35 in development of culture and indirectly even of technology.

00:29:39 But of course, without the technology there,

00:29:42 it was all some quite abstract thinking.

00:29:44 So now we’re at a time in history

00:29:46 when a lot of these ideas can be made real,

00:29:51 which is amazing and scary, right?

00:29:54 It’s kind of interesting to think,

00:29:56 what do you think Nietzsche would do

00:29:57 if he was born a century later or transported through time?

00:30:00 What do you think he would say about AI?

00:30:02 I mean. Well, those are quite different.

00:30:04 If he’s born a century later or transported through time.

00:30:07 Well, he’d be on like TikTok and Instagram

00:30:09 and he would never write the great works he’s written.

00:30:11 So let’s transport him through time.

00:30:13 Maybe also Sprach Zarathustra would be a music video,

00:30:16 right? I mean, who knows?

00:30:19 Yeah, but if he was transported through time,

00:30:21 do you think, that’d be interesting actually to go back.

00:30:26 You just made me realize that it’s possible to go back

00:30:29 and read Nietzsche with an eye of,

00:30:31 is there some thinking about artificial beings?

00:30:34 I’m sure there he had inklings.

00:30:37 I mean, with Frankenstein before him,

00:30:40 I’m sure he had inklings of artificial beings

00:30:42 somewhere in the text.

00:30:44 It’d be interesting to try to read his work

00:30:46 to see if Superman was actually an AGI system.

00:30:55 Like if he had inklings of that kind of thinking.

00:30:57 He didn’t.

00:30:58 He didn’t.

00:30:59 No, I would say not.

00:31:01 I mean, he had a lot of inklings of modern cognitive science,

00:31:06 which are very interesting.

00:31:07 If you look in like the third part of the collection

00:31:11 that’s been titled The Will to Power.

00:31:13 I mean, in book three there,

00:31:15 there’s very deep analysis of thinking processes,

00:31:20 but he wasn’t so much of a physical tinkerer type guy,

00:31:27 right? He was very abstract.

00:31:29 Do you think, what do you think about the will to power?

00:31:32 Do you think human, what do you think drives humans?

00:31:36 Is it?

00:31:37 Oh, an unholy mix of things.

00:31:39 I don’t think there’s one pure, simple,

00:31:42 and elegant objective function driving humans by any means.

00:31:47 What do you think, if we look at,

00:31:50 I know it’s hard to look at humans in an aggregate,

00:31:53 but do you think overall humans are good?

00:31:57 Or do we have both good and evil within us

00:32:01 that depending on the circumstances,

00:32:03 depending on whatever can percolate to the top?

00:32:08 Good and evil are very ambiguous, complicated

00:32:13 and in some ways silly concepts.

00:32:15 But if we could dig into your question

00:32:18 from a couple of directions.

00:32:19 So I think if you look in evolution,

00:32:23 humanity is shaped both by individual selection

00:32:28 and what biologists would call group selection,

00:32:30 like tribe level selection, right?

00:32:32 So individual selection has driven us

00:32:36 in a selfish DNA sort of way.

00:32:38 So that each of us does to a certain approximation

00:32:43 what will help us propagate our DNA to future generations.

00:32:47 I mean, that’s why I’ve got four kids so far

00:32:50 and probably that’s not the last one.

00:32:53 On the other hand.

00:32:55 I like the ambition.

00:32:56 Tribal, like group selection means humans in a way

00:33:00 will do what will advocate for the persistence of the DNA

00:33:04 of their whole tribe or their social group.

00:33:08 And in biology, you have both of these, right?

00:33:11 And you can see, say an ant colony or a beehive,

00:33:14 there’s a lot of group selection

00:33:15 in the evolution of those social animals.

00:33:18 On the other hand, say a big cat

00:33:21 or some very solitary animal,

00:33:23 it’s a lot more biased toward individual selection.

00:33:26 Humans are an interesting balance.

00:33:28 And I think this reflects itself

00:33:31 in what we would view as selfishness versus altruism

00:33:35 to some extent.

00:33:36 So we just have both of those objective functions

00:33:40 contributing to the makeup of our brains.

00:33:43 And then as Nietzsche analyzed in his own way

00:33:47 and others have analyzed in different ways,

00:33:49 I mean, we abstract this as well,

00:33:51 we have both good and evil within us, right?

00:33:55 Because a lot of what we view as evil

00:33:57 is really just selfishness.

00:34:00 A lot of what we view as good is altruism,

00:34:03 which means doing what’s good for the tribe.

00:34:07 And on that level,

00:34:08 we have both of those just baked into us

00:34:11 and that’s how it is.

00:34:13 Of course, there are psychopaths and sociopaths

00:34:17 and people who get gratified by the suffering of others.

00:34:21 And that’s a different thing.

00:34:25 Yeah, those are exceptions on the whole.

00:34:27 But I think at core, we’re not purely selfish,

00:34:31 we’re not purely altruistic, we are a mix

00:34:35 and that’s the nature of it.

00:34:38 And we also have a complex constellation of values

00:34:43 that are just very specific to our evolutionary history.

00:34:49 Like we love waterways and mountains

00:34:52 and the ideal place to put a house

00:34:54 is in a mountain overlooking the water, right?

00:34:56 And we care a lot about our kids

00:35:00 and we care a little less about our cousins

00:35:02 and even less about our fifth cousins.

00:35:04 I mean, there are many particularities to human values,

00:35:09 which whether they’re good or evil

00:35:11 depends on your perspective.

00:35:15 Say, I spent a lot of time in Ethiopia in Addis Ababa

00:35:19 where we have one of our AI development offices

00:35:22 for my SingularityNet project.

00:35:24 And when I walk through the streets in Addis,

00:35:27 you know, there’s people lying by the side of the road,

00:35:31 like just living there by the side of the road,

00:35:33 dying probably of curable diseases

00:35:35 without enough food or medicine.

00:35:37 And when I walk by them, you know, I feel terrible,

00:35:39 I give them money.

00:35:41 When I come back home to the developed world,

00:35:45 they’re not on my mind that much.

00:35:46 I do donate some, but I mean,

00:35:48 I also spend some of the limited money I have

00:35:52 enjoying myself in frivolous ways

00:35:54 rather than donating it to those people who are right now,

00:35:58 like starving, dying and suffering on the roadside.

00:36:01 So does that make me evil?

00:36:03 I mean, it makes me somewhat selfish

00:36:05 and somewhat altruistic.

00:36:06 And we each balance that in our own way, right?

00:36:10 So whether that will be true of all possible AGI’s

00:36:17 is a subtler question.

00:36:19 So that’s how humans are.

00:36:21 So you have a sense, you kind of mentioned

00:36:23 that there’s a selfish,

00:36:25 I’m not gonna bring up the whole Ayn Rand idea

00:36:28 of selfishness being the core virtue.

00:36:31 That’s a whole interesting kind of tangent

00:36:33 that I think we’ll just distract ourselves on.

00:36:36 I have to make one amusing comment.

00:36:38 Sure.

00:36:39 A comment that has amused me anyway.

00:36:41 So the, yeah, I have extraordinary negative respect

00:36:46 for Ayn Rand.

00:36:47 Negative, what’s a negative respect?

00:36:50 But when I worked with a company called Genescient,

00:36:54 which was evolving flies to have extraordinary long lives

00:36:59 in Southern California.

00:37:01 So we had flies that were evolved by artificial selection

00:37:04 to have five times the lifespan of normal fruit flies.

00:37:07 But the population of super long lived flies

00:37:11 was physically sitting in a spare room

00:37:14 at an Ayn Rand elementary school in Southern California.

00:37:18 So that was just like,

00:37:19 well, if I saw this in a movie, I wouldn’t believe it.

00:37:23 Well, yeah, the universe has a sense of humor

00:37:26 in that kind of way.

00:37:26 That fits in, humor fits in somehow

00:37:28 into this whole absurd existence.

00:37:30 But you mentioned the balance between selfishness

00:37:33 and altruism as kind of being innate.

00:37:37 Do you think it’s possible

00:37:38 that’s kind of an emergent phenomena,

00:37:42 those peculiarities of our value system?

00:37:45 How much of it is innate?

00:37:47 How much of it is something we collectively

00:37:49 kind of like a Dostoevsky novel

00:37:52 bring to life together as a civilization?

00:37:54 I mean, the answer to nature versus nurture

00:37:57 is usually both.

00:37:58 And of course it’s nature versus nurture

00:38:01 versus self organization, as you mentioned.

00:38:04 So clearly there are evolutionary roots

00:38:08 to individual and group selection

00:38:11 leading to a mix of selfishness and altruism.

00:38:13 On the other hand,

00:38:15 different cultures manifest that in different ways.

00:38:19 Well, we all have basically the same biology.

00:38:22 And if you look at sort of precivilized cultures,

00:38:26 you have tribes like the Yanomamo in Venezuela,

00:38:29 which their culture is focused on killing other tribes.

00:38:35 And you have other Stone Age tribes

00:38:37 that are mostly peaceful and have big taboos

00:38:40 against violence.

00:38:41 So you can certainly have a big difference

00:38:43 in how culture manifests

00:38:46 these innate biological characteristics,

00:38:50 but still, there’s probably limits

00:38:54 that are given by our biology.

00:38:56 I used to argue this with my great grandparents

00:39:00 who were Marxists actually,

00:39:01 because they believed in the withering away of the state.

00:39:04 Like they believe that,

00:39:06 as you move from capitalism to socialism to communism,

00:39:10 people would just become more social minded

00:39:13 so that a state would be unnecessary

00:39:15 and everyone would give everyone else what they needed.

00:39:20 Now, setting aside that

00:39:23 that’s not what the various Marxist experiments

00:39:25 on the planet seem to be heading toward in practice.

00:39:29 Just as a theoretical point,

00:39:32 I was very dubious that human nature could go there.

00:39:37 Like at that time when my great grandparents are alive,

00:39:39 I was just like, you know, I’m a cynical teenager.

00:39:43 I think humans are just jerks.

00:39:45 The state is not gonna wither away.

00:39:48 If you don’t have some structure

00:39:49 keeping people from screwing each other over,

00:39:51 they’re gonna do it.

00:39:52 So now I actually don’t quite see things that way.

00:39:56 I mean, I think my feeling now subjectively

00:39:59 is the culture aspect is more significant

00:40:02 than I thought it was when I was a teenager.

00:40:04 And I think you could have a human society

00:40:08 that was dialed dramatically further toward,

00:40:11 you know, self awareness, other awareness,

00:40:13 compassion and sharing than our current society.

00:40:16 And of course, greater material abundance helps,

00:40:20 but to some extent material abundance

00:40:23 is a subjective perception also

00:40:25 because many Stone Age cultures perceived themselves

00:40:28 as living in great material abundance

00:40:30 that they had all the food and water they wanted,

00:40:32 they lived in a beautiful place,

00:40:33 that they had sex lives, that they had children.

00:40:37 I mean, they had abundance without any factories, right?

00:40:42 So I think humanity probably would be capable

00:40:46 of fundamentally more positive and joy filled mode

00:40:51 of social existence than what we have now.

00:40:57 Clearly Marx didn’t quite have the right idea

00:40:59 about how to get there.

00:41:01 I mean, he missed a number of key aspects

00:41:05 of human society and its evolution.

00:41:09 And if we look at where we are in society now,

00:41:13 how to get there is a quite different question

00:41:15 because there are very powerful forces

00:41:18 pushing people in different directions

00:41:21 than a positive, joyous, compassionate existence, right?

00:41:26 So if we were to try, you know,

00:41:28 Elon Musk dreams of colonizing Mars at the moment,

00:41:32 so we maybe will have a chance to start a new civilization

00:41:36 with a new governmental system.

00:41:38 And certainly there’s quite a bit of chaos.

00:41:41 We’re sitting now, I don’t know what the date is,

00:41:44 but this is June.

00:41:46 There’s quite a bit of chaos in all different forms

00:41:49 going on in the United States and all over the world.

00:41:52 So there’s a hunger for new types of governments,

00:41:55 new types of leadership, new types of systems.

00:41:59 And so what are the forces at play

00:42:01 and how do we move forward?

00:42:04 Yeah, I mean, colonizing Mars, first of all,

00:42:06 it’s a super cool thing to do.

00:42:08 We should be doing it.

00:42:10 So you love the idea.

00:42:11 Yeah, I mean, it’s more important than making

00:42:14 chocolatier chocolates and sexier lingerie

00:42:18 and many of the things that we spend

00:42:21 a lot more resources on as a species, right?

00:42:24 So I mean, we certainly should do it.

00:42:26 I think the possible futures in which a Mars colony

00:42:33 makes a critical difference for humanity are very few.

00:42:38 I mean, I think, I mean, assuming we make a Mars colony

00:42:42 and people go live there in a couple of decades,

00:42:44 I mean, their supplies are gonna come from Earth.

00:42:46 The money to make the colony came from Earth

00:42:48 and whatever powers are supplying the goods there

00:42:53 from Earth are gonna, in effect, be in control

00:42:56 of that Mars colony.

00:42:58 Of course, there are outlier situations

00:43:02 where Earth gets nuked into oblivion

00:43:06 and somehow Mars has been made self sustaining by that point

00:43:10 and then Mars is what allows humanity to persist.

00:43:14 But I think that those are very, very, very unlikely.

00:43:19 You don’t think it could be a first step on a long journey?

00:43:23 Of course it’s a first step on a long journey,

00:43:24 which is awesome.

00:43:27 I’m guessing the colonization of the rest

00:43:30 of the physical universe will probably be done

00:43:33 by AGI’s that are better designed to live in space

00:43:38 than by the meat machines that we are.

00:43:41 But I mean, who knows?

00:43:43 We may cryopreserve ourselves in some superior way

00:43:45 to what we know now and like shoot ourselves out

00:43:48 to Alpha Centauri and beyond.

00:43:50 I mean, that’s all cool.

00:43:52 It’s very interesting and it’s much more valuable

00:43:55 than most things that humanity is spending its resources on.

00:43:58 On the other hand, with AGI, we can get to a singularity

00:44:03 before the Mars colony becomes sustaining for sure,

00:44:07 possibly before it’s even operational.

00:44:10 So your intuition is that that’s the problem

00:44:12 if we really invest resources, we can get there faster

00:44:14 than a legitimate full self sustaining colonization of Mars.

00:44:19 Yeah, and it’s very clear to me that we will,

00:44:23 because there’s so much economic value

00:44:26 in getting from narrow AI toward AGI,

00:44:29 whereas the Mars colony, there’s less economic value

00:44:33 until you get quite far out into the future.

00:44:37 So I think that’s very interesting.

00:44:40 I just think it’s somewhat off to the side.

00:44:44 I mean, just as I think, say, art and music

00:44:48 are very, very interesting and I wanna see resources

00:44:51 go into amazing art and music being created.

00:44:55 And I’d rather see that than a lot of the garbage

00:44:59 that the society spends their money on.

00:45:01 On the other hand, I don’t think Mars colonization

00:45:04 or inventing amazing new genres of music

00:45:07 is one of the things that is most likely

00:45:11 to make a critical difference in the evolution

00:45:13 of human or nonhuman life in this part of the universe

00:45:18 over the next decade.

00:45:19 Do you think AGI is really?

00:45:21 AGI is by far the most important thing

00:45:25 that’s on the horizon.

00:45:27 And then technologies that have direct ability

00:45:31 to enable AGI or to accelerate AGI are also very important.

00:45:37 For example, say, quantum computing.

00:45:40 I don’t think that’s critical to achieve AGI,

00:45:42 but certainly you could see how

00:45:44 the right quantum computing architecture

00:45:46 could massively accelerate AGI,

00:45:49 similar other types of nanotechnology.

00:45:52 Right now, the quest to cure aging and end disease

00:45:57 while not in the big picture as important as AGI,

00:46:02 of course, it’s important to all of us as individual humans.

00:46:07 And if someone made a super longevity pill

00:46:11 and distributed it tomorrow, I mean,

00:46:14 that would be huge and a much larger impact

00:46:17 than a Mars colony is gonna have for quite some time.

00:46:20 But perhaps not as much as an AGI system.

00:46:23 No, because if you can make a benevolent AGI,

00:46:27 then all the other problems are solved.

00:46:28 I mean, if then the AGI can be,

00:46:31 once it’s as generally intelligent as humans,

00:46:34 it can rapidly become massively more generally intelligent

00:46:37 than humans.

00:46:38 And then that AGI should be able to solve science

00:46:42 and engineering problems much better than human beings,

00:46:46 as long as it is in fact motivated to do so.

00:46:49 That’s why I said a benevolent AGI.

00:46:52 There could be other kinds.

00:46:54 Maybe it’s good to step back a little bit.

00:46:56 I mean, we’ve been using the term AGI.

00:46:58 People often cite you as the creator,

00:47:00 or at least the popularizer of the term AGI,

00:47:03 artificial general intelligence.

00:47:05 Can you tell the origin story of the term maybe?

00:47:09 So yeah, I would say I launched the term AGI upon the world

00:47:14 for what it’s worth without ever fully being in love

00:47:19 with the term.

00:47:21 What happened is I was editing a book,

00:47:25 and this process started around 2001 or two.

00:47:27 I think the book came out 2005, finally.

00:47:30 I was editing a book which I provisionally

00:47:33 was titling Real AI.

00:47:35 And I mean, the goal was to gather together

00:47:38 fairly serious academicish papers

00:47:41 on the topic of making thinking machines

00:47:43 that could really think in the sense like people can,

00:47:46 or even more broadly than people can, right?

00:47:49 So then I was reaching out to other folks

00:47:52 that I had encountered here or there

00:47:54 who were interested in that,

00:47:57 which included some other folks who I knew

00:48:01 from the transhumanist and singularitarian world,

00:48:04 like Peter Voss, who has a company, AGI Incorporated,

00:48:07 still in California, and included Shane Legg,

00:48:13 who had worked for me at my company, WebMind,

00:48:15 in New York in the late 90s,

00:48:17 who by now has become rich and famous.

00:48:20 He was one of the cofounders of Google DeepMind.

00:48:22 But at that time, Shane was,

00:48:25 I think he may have just started doing his PhD

00:48:31 with Marcus Hutter, who at that time

00:48:35 hadn’t yet published his book, Universal AI,

00:48:38 which sort of gives a mathematical foundation

00:48:41 for artificial general intelligence.

00:48:43 So I reached out to Shane and Marcus and Peter Voss

00:48:46 and Pei Wang, who was another former employee of mine

00:48:49 who had been Douglas Hofstadter’s PhD student

00:48:51 who had his own approach to AGI,

00:48:53 and a bunch of Russian folks. I reached out to these guys

00:48:58 and they contributed papers for the book.

00:49:01 But that was my provisional title, but I never loved it

00:49:04 because in the end, I was doing some,

00:49:09 what we would now call narrow AI as well,

00:49:12 like applying machine learning to genomics data

00:49:14 or chat data for sentiment analysis.

00:49:17 I mean, that work is real.

00:49:19 And in a sense, it’s really AI.

00:49:22 It’s just a different kind of AI.

00:49:26 Ray Kurzweil wrote about narrow AI versus strong AI,

00:49:31 but that seemed weird to me because first of all,

00:49:35 narrow and strong are not antonyms.

00:49:36 That’s right.

00:49:38 But secondly, strong AI was used

00:49:41 in the cognitive science literature

00:49:43 to mean the hypothesis that digital computer AIs

00:49:46 could have true consciousness like human beings.

00:49:50 So there was already a meaning to strong AI,

00:49:52 which was complexly different, but related, right?

00:49:56 So we were tossing around on an email list

00:50:00 what the title should be.

00:50:03 And so we talked about narrow AI, broad AI, wide AI,

00:50:07 general AI.

00:50:09 And I think it was either Shane Legg or Peter Voss

00:50:15 on the private email discussion we had.

00:50:18 He said, but why don’t we go

00:50:18 with AGI, artificial general intelligence?

00:50:21 And Pei Wang wanted to do GAI,

00:50:24 general artificial intelligence,

00:50:25 because in Chinese it goes in that order.

00:50:27 But we figured gay wouldn’t work

00:50:30 in US culture at that time, right?

00:50:33 So we went with the AGI.

00:50:37 We used it for the title of that book.

00:50:39 And part of Peter and Shane’s reasoning

00:50:43 was you have the G factor in psychology,

00:50:45 which is IQ, general intelligence, right?

00:50:47 So you have a meaning of GI, general intelligence,

00:50:51 in psychology, so then you’re looking at artificial GI.

00:50:55 So then we use that for the title of the book.

00:51:00 And so I think maybe both Shane and Peter

00:51:04 think they invented the term,

00:51:05 but then later after the book was published,

00:51:08 this guy, Mark Gubrud, came up to me and he’s like,

00:51:11 well, I published an essay with the term AGI

00:51:14 in like 1997 or something.

00:51:17 And so I’m just waiting for some Russian to come out

00:51:20 and say they published that in 1953, right?

00:51:23 I mean, that term is not dramatically innovative

00:51:27 or anything.

00:51:28 It’s one of these obvious in hindsight things,

00:51:31 which is also annoying in a way,

00:51:34 because Joscha Bach, who you interviewed,

00:51:39 is a close friend of mine.

00:51:40 He likes the term synthetic intelligence,

00:51:43 which I like much better,

00:51:44 but it hasn’t actually caught on, right?

00:51:47 Because I mean, artificial is a bit off to me

00:51:51 because artifice is like a tool or something,

00:51:54 but not all AGI’s are gonna be tools.

00:51:57 I mean, they may be now,

00:51:58 but we’re aiming toward making them agents

00:52:00 rather than tools.

00:52:02 And in a way, I don’t like the distinction

00:52:04 between artificial and natural,

00:52:07 because I mean, we’re part of nature also

00:52:09 and machines are part of nature.

00:52:12 I mean, you can look at evolved versus engineered,

00:52:14 but that’s a different distinction.

00:52:17 Then it should be engineered general intelligence, right?

00:52:20 And then general, well,

00:52:21 if you look at Marcus Hutter’s book,

00:52:24 Universal AI, what he argues there is,

00:52:28 within the domain of computation theory,

00:52:30 which is limited, but interesting.

00:52:31 So if you assume computable environments

00:52:33 or computable reward functions,

00:52:35 then he articulates what would be

00:52:37 a truly general intelligence,

00:52:40 a system called AIXI, which is quite beautiful.

00:52:43 AIXI, and that’s the middle name

00:52:46 of my latest child, actually, is it?

00:52:49 What’s the first name?

00:52:50 First name is QORXI, Q O R X I,

00:52:52 which my wife came up with,

00:52:53 but that’s an acronym for quantum organized rational

00:52:57 expanding intelligence, and his middle name is Xiphonies,

00:53:03 actually, which refers to the formal principle underlying AIXI.

00:53:08 But in any case.

00:53:09 You’re giving Elon Musk’s new child a run for his money.

00:53:12 Well, I did it first.

00:53:13 He copied me with this new freakish name,

00:53:17 but now if I have another baby,

00:53:18 I’m gonna have to outdo him.

00:53:20 It’s becoming an arms race of weird, geeky baby names.

00:53:24 We’ll see what the babies think about it, right?

00:53:26 But I mean, my oldest son, Zarathustra, loves his name,

and my daughter, Scheherazade, loves her name.

00:53:33 So far, basically, if you give your kids weird names.

00:53:36 They live up to it.

00:53:37 Well, you’re obliged to make the kids weird enough

00:53:39 that they like the names, right?

00:53:42 It directs their upbringing in a certain way.

00:53:43 But yeah, anyway, I mean, what Marcus showed in that book

00:53:47 is that a truly general intelligence

00:53:50 theoretically is possible,

00:53:51 but would take infinite computing power.

00:53:53 So then the artificial is a little off.

00:53:56 The general is not really achievable within physics

00:53:59 as we know it.

00:54:01 And I mean, physics as we know it may be limited,

00:54:03 but that’s what we have to work with now.

00:54:05 Intelligence.

00:54:06 Infinitely general, you mean,

00:54:07 like information processing perspective, yeah.

00:54:10 Yeah, intelligence is not very well defined either, right?

00:54:14 I mean, what does it mean?

00:54:16 I mean, in AI now, it’s fashionable to look at it

00:54:19 as maximizing an expected reward over the future.

00:54:23 But that sort of definition is pathological in various ways.

00:54:27 And my friend David Weinbaum, AKA Weaver,

00:54:31 he had a beautiful PhD thesis on open ended intelligence,

00:54:34 trying to conceive intelligence in a…

00:54:36 Without a reward.

00:54:38 Yeah, he’s just looking at it differently.

00:54:40 He’s looking at complex self organizing systems

00:54:42 and looking at an intelligent system

00:54:44 as being one that revises and grows

00:54:47 and improves itself in conjunction with its environment

00:54:51 without necessarily there being one objective function

00:54:54 it’s trying to maximize.

00:54:56 Although over certain intervals of time,

00:54:58 it may act as if it’s optimizing

00:54:59 a certain objective function.

Very much Solaris from Stanislaw Lem’s novels, right?

00:55:04 So yeah, the point is artificial, general and intelligence.

00:55:07 Don’t work.

00:55:08 They’re all bad.

00:55:09 On the other hand, everyone knows what AI is.

00:55:12 And AGI seems immediately comprehensible

00:55:15 to people with a technical background.

00:55:17 So I think that the term has served

a sociological function.

00:55:20 And now it’s out there everywhere, which baffles me.

00:55:24 It’s like KFC.

00:55:25 I mean, that’s it.

00:55:27 We’re stuck with AGI probably for a very long time

00:55:30 until AGI systems take over and rename themselves.

00:55:33 Yeah.

00:55:34 And then we’ll be biological.

00:55:36 We’re stuck with GPUs too,

00:55:37 which mostly have nothing to do with graphics.

Anymore, right?

00:55:40 I wonder what the AGI system will call us humans.

00:55:43 That was maybe.

00:55:44 Grandpa.

00:55:45 Yeah.

00:55:45 Yeah.

GPUs.

00:55:47 Yeah.

00:55:48 Grandpa processing unit, yeah.

00:55:50 Biological grandpa processing units.

00:55:52 Yeah.

00:55:54 Okay, so maybe also just a comment on AGI representing

00:56:00 before even the term existed,

00:56:02 representing a kind of community.

00:56:04 You’ve talked about this in the past,

00:56:06 sort of AI is coming in waves,

00:56:08 but there’s always been this community of people

00:56:10 who dream about creating general human level

00:56:15 super intelligence systems.

00:56:19 Can you maybe give your sense of the history

00:56:21 of this community as it exists today,

00:56:24 as it existed before this deep learning revolution

00:56:26 all throughout the winters and the summers of AI?

00:56:29 Sure.

00:56:30 First, I would say as a side point,

00:56:33 the winters and summers of AI are greatly exaggerated

by Americans, in that,

00:56:40 if you look at the publication record

00:56:43 of the artificial intelligence community

00:56:46 since say the 1950s,

00:56:48 you would find a pretty steady growth

and advance of ideas and papers.

00:56:53 And what’s thought of as an AI winter or summer

00:56:57 was sort of how much money is the US military

00:57:00 pumping into AI, which was meaningful.

00:57:04 On the other hand, there was AI going on in Germany,

00:57:06 UK and in Japan and in Russia, all over the place,

00:57:10 while US military got more and less enthused about AI.

00:57:16 So, I mean.

00:57:17 That happened to be, just for people who don’t know,

00:57:20 the US military happened to be the main source

00:57:22 of funding for AI research.

00:57:24 So another way to phrase that is it’s up and down

00:57:27 of funding for artificial intelligence research.

00:57:31 And I would say the correlation between funding

00:57:34 and intellectual advance was not 100%, right?

00:57:38 Because I mean, in Russia, as an example, or in Germany,

00:57:42 there was less dollar funding than in the US,

00:57:44 but many foundational ideas were laid out,

00:57:48 but it was more theory than implementation, right?

00:57:50 And US really excelled at sort of breaking through

00:57:54 from theoretical papers to working implementations,

00:58:00 which did go up and down somewhat

00:58:03 with US military funding,

00:58:04 but still, I mean, you can look in the 1980s,

Ernst Dickmanns in Germany had self driving cars

00:58:10 on the Autobahn, right?

00:58:11 And I mean, it was a little early

00:58:15 with regard to the car industry,

so it didn’t catch on the way it has now.

00:58:20 But I mean, that whole advancement

00:58:22 of self driving car technology in Germany

00:58:25 was pretty much independent of AI military summers

00:58:29 and winters in the US.

00:58:31 So there’s been more going on in AI globally

00:58:34 than not only most people on the planet realize,

but also than most new AI PhDs realize

00:58:40 because they’ve come up within a certain sub field of AI

00:58:44 and haven’t had to look so much beyond that.

00:58:47 But I would say when I got my PhD in 1989 in mathematics,

00:58:54 I was interested in AI already.

00:58:56 In Philadelphia.

00:58:56 Yeah, I started at NYU, then I transferred to Philadelphia

00:59:00 to Temple University, good old North Philly.

00:59:03 North Philly.

00:59:04 Yeah, yeah, yeah, the pearl of the US.

00:59:09 You never stopped at a red light then

00:59:10 because you were afraid if you stopped at a red light,

00:59:12 someone will carjack you.

00:59:13 So you just drive through every red light.

00:59:15 Yeah.

00:59:18 Every day driving or bicycling to Temple from my house

00:59:20 was like a new adventure.

00:59:24 But yeah, the reason I didn’t do a PhD in AI

00:59:27 was what people were doing in the academic AI field then,

00:59:30 was just astoundingly boring and seemed wrong headed to me.

00:59:34 It was really like rule based expert systems

00:59:38 and production systems.

00:59:39 And actually I loved mathematical logic.

00:59:42 I had nothing against logic as the cognitive engine for an AI,

00:59:45 but the idea that you could type in the knowledge

00:59:48 that AI would need to think seemed just completely stupid

00:59:52 and wrong headed to me.

00:59:55 I mean, you can use logic if you want,

00:59:57 but somehow the system has got to be…

01:00:00 Automated.

01:00:01 Learning, right?

01:00:01 It should be learning from experience.

01:00:03 And the AI field then was not interested

01:00:06 in learning from experience.

01:00:08 I mean, some researchers certainly were.

01:00:11 I mean, I remember in mid eighties,

I discovered a book by John Andreae,

01:00:17 which was, it was about a reinforcement learning system

called PURR-PUSS, which was an acronym

01:00:27 that I can’t even remember what it was for,

01:00:28 but purpose anyway.

01:00:30 But he, I mean, that was a system

01:00:32 that was supposed to be an AGI

01:00:34 and basically by some sort of fancy

01:00:38 like Markov decision process learning,

01:00:41 it was supposed to learn everything

01:00:43 just from the bits coming into it

01:00:44 and learn to maximize its reward

01:00:46 and become intelligent, right?

01:00:49 So that was there in academia back then,

01:00:51 but it was like isolated, scattered, weird people.

01:00:55 But all these isolated, scattered, weird people

01:00:57 in that period, I mean, they laid the intellectual grounds

01:01:01 for what happened later.

So you look at John Andreae at University of Canterbury

with his PURR-PUSS reinforcement learning Markov system.

01:01:09 He was the PhD supervisor for John Cleary in New Zealand.

01:01:14 Now, John Cleary worked with me

01:01:17 when I was at Waikato University in 1993 in New Zealand.

And he worked with Ian Witten there

01:01:23 and they launched WEKA,

01:01:25 which was the first open source machine learning toolkit,

01:01:29 which was launched in, I guess, 93 or 94

01:01:33 when I was at Waikato University.

01:01:35 Written in Java, unfortunately.

01:01:36 Written in Java, which was a cool language back then.

01:01:39 I guess it’s still, well, it’s not cool anymore,

01:01:41 but it’s powerful.

01:01:43 I find, like most programmers now,

01:01:45 I find Java unnecessarily bloated,

01:01:48 but back then it was like Java or C++ basically.

01:01:52 And Java was easier for students.

01:01:55 Amusingly, a lot of the work on WEKA

01:01:57 when we were in New Zealand was funded by a US,

01:02:01 sorry, a New Zealand government grant

01:02:03 to use machine learning

01:02:05 to predict the menstrual cycles of cows.

01:02:08 So in the US, all the grant funding for AI

01:02:10 was about how to kill people or spy on people.

01:02:13 In New Zealand, it’s all about cows or kiwi fruits, right?

01:02:16 Yeah.

So yeah, anyway, I mean, John Andreae

01:02:20 had his probability theory based reinforcement learning,

01:02:24 proto AGI.

01:02:25 John Cleary was trying to do much more ambitious,

01:02:29 probabilistic AGI systems.

01:02:31 Now, John Cleary helped do WEKA,

01:02:36 which is the first open source machine learning toolkit.

01:02:39 So the predecessor for TensorFlow and Torch

01:02:41 and all these things.

01:02:43 Also, Shane Legg was at Waikato

01:02:46 working with John Cleary and Ian Witten

01:02:50 and this whole group.

01:02:51 And then working with my own companies,

01:02:55 my company, WebMind, an AI company I had in the late 90s

01:02:59 with a team there at Waikato University,

01:03:02 which is how Shane got his head full of AGI,

01:03:05 which led him to go on

01:03:06 and with Demis Hassabis found DeepMind.

01:03:08 So what you can see through that lineage is,

01:03:11 you know, in the 80s and 70s,

John Andreae was trying to build probabilistic

01:03:14 reinforcement learning AGI systems.

The technology, the computers just weren’t there to support it.

His ideas were very similar to what people are doing now.

01:03:23 But, you know, although he’s long since passed away

01:03:27 and didn’t become that famous outside of Canterbury,

01:03:30 I mean, the lineage of ideas passed on from him

01:03:33 to his students, to their students,

01:03:35 you can go trace directly from there to me

01:03:37 and to DeepMind, right?

01:03:39 So that there was a lot going on in AGI

01:03:42 that did ultimately lay the groundwork

01:03:46 for what we have today, but there wasn’t a community, right?

01:03:48 And so when I started trying to pull together

01:03:53 an AGI community, it was in the, I guess,

01:03:56 the early aughts when I was living in Washington, D.C.

01:04:00 and making a living doing AI consulting

01:04:03 for various U.S. government agencies.

01:04:07 And I organized the first AGI workshop in 2006.

01:04:13 And I mean, it wasn’t like it was literally

01:04:15 in my basement or something.

01:04:17 I mean, it was in the conference room at the Marriott

01:04:19 in Bethesda, it’s not that edgy or underground,

01:04:23 unfortunately, but still.

01:04:25 How many people attended?

01:04:25 About 60 or something.

01:04:27 That’s not bad.

01:04:28 I mean, D.C. has a lot of AI going on,

01:04:30 probably until the last five or 10 years,

01:04:34 much more than Silicon Valley, although it’s just quiet

01:04:37 because of the nature of what happens in D.C.

01:04:41 Their business isn’t driven by PR.

01:04:43 Mostly when something starts to work really well,

01:04:46 it’s taken black and becomes even more quiet, right?

01:04:49 But yeah, the thing is that really had the feeling

01:04:52 of a group of starry eyed mavericks huddled in a basement,

01:04:58 like plotting how to overthrow the narrow AI establishment.

01:05:02 And for the first time, in some cases,

01:05:05 coming together with others who shared their passion

01:05:08 for AGI and the technical seriousness about working on it.

01:05:13 And that’s very, very different than what we have today.

01:05:19 I mean, now it’s a little bit different.

01:05:22 We have AGI conference every year

01:05:24 and there’s several hundred people rather than 50.

01:05:29 Now it’s more like this is the main gathering

01:05:32 of people who want to achieve AGI

01:05:35 and think that large scale nonlinear regression

01:05:39 is not the golden path to AGI.

01:05:42 So I mean it’s…

01:05:43 AKA neural networks.

01:05:44 Yeah, yeah, yeah.

01:05:44 Well, certain architectures for learning using neural networks.

01:05:51 So yeah, the AGI conferences are sort of now

01:05:54 the main concentration of people not obsessed

01:05:57 with deep neural nets and deep reinforcement learning,

01:06:00 but still interested in AGI, not the only ones.

01:06:06 I mean, there’s other little conferences and groupings

01:06:10 interested in human level AI

01:06:13 and cognitive architectures and so forth.

01:06:16 But yeah, it’s been a big shift.

01:06:17 Like back then, you couldn’t really…

It would have been very, very edgy then

01:06:23 to give a university department seminar

01:06:26 that mentioned AGI or human level AI.

01:06:28 It was more like you had to talk about

01:06:30 something more short term and immediately practical

and then in the bar after the seminar,

01:06:36 you could bullshit about AGI in the same breath

01:06:39 as time travel or the simulation hypothesis or something.

01:06:44 Whereas now, AGI is not only in the academic seminar room,

01:06:48 like you have Vladimir Putin knows what AGI is.

01:06:51 And he’s like, Russia needs to become the leader in AGI.

01:06:55 So national leaders and CEOs of large corporations.

01:07:01 I mean, the CTO of Intel, Justin Ratner,

01:07:04 this was years ago, Singularity Summit Conference,

01:07:06 2008 or something.

01:07:07 He’s like, we believe Ray Kurzweil,

01:07:10 the singularity will happen in 2045

01:07:12 and it will have Intel inside.

01:07:13 So, I mean, it’s gone from being something

01:07:18 which is the pursuit of like crazed mavericks,

01:07:21 crackpots and science fiction fanatics

01:07:24 to being a marketing term for large corporations

01:07:30 and the national leaders,

which is an astounding transition.

01:07:35 But yeah, in the course of this transition,

01:07:40 I think a bunch of sub communities have formed

01:07:42 and the community around the AGI conference series

01:07:45 is certainly one of them.

01:07:47 It hasn’t grown as big as I might’ve liked it to.

01:07:51 On the other hand, sometimes a modest size community

01:07:56 can be better for making intellectual progress also.

01:07:59 Like you go to a society for neuroscience conference,

01:08:02 you have 35 or 40,000 neuroscientists.

01:08:05 On the one hand, it’s amazing.

01:08:07 On the other hand, you’re not gonna talk to the leaders

01:08:10 of the field there if you’re an outsider.

01:08:14 Yeah, in the same sense, the AAAI,

01:08:17 the artificial intelligence,

01:08:20 the main kind of generic artificial intelligence

01:08:23 conference is too big.

01:08:26 It’s too amorphous.

01:08:28 Like it doesn’t make sense.

01:08:30 Well, yeah, and NIPS has become a company advertising outlet

on the whole.

01:08:37 So, I mean, to comment on the role of AGI

01:08:40 in the research community, I’d still,

01:08:42 if you look at NeurIPS, if you look at CVPR,

if you look at ICLR,

01:08:49 AGI is still seen as the outcast.

01:08:51 I would say in these main machine learning,

01:08:55 in these main artificial intelligence conferences

01:08:59 amongst the researchers,

01:09:00 I don’t know if it’s an accepted term yet.

01:09:03 What I’ve seen bravely, you mentioned Shane Legg’s

01:09:08 DeepMind and then OpenAI are the two places that are,

01:09:13 I would say unapologetically so far,

01:09:15 I think it’s actually changing unfortunately,

01:09:17 but so far they’ve been pushing the idea

01:09:19 that the goal is to create an AGI.

01:09:22 Well, they have billions of dollars behind them.

01:09:24 So, I mean, they’re in the public mind

01:09:27 that certainly carries some oomph, right?

01:09:30 I mean, I mean.

01:09:30 But they also have really strong researchers, right?

01:09:33 They do, they’re great teams.

01:09:34 I mean, DeepMind in particular, yeah.

01:09:36 And they have, I mean, DeepMind has Marcus Hutter

01:09:39 walking around.

01:09:40 I mean, there’s all these folks who basically

01:09:43 their full time position involves dreaming

01:09:46 about creating AGI.

01:09:47 I mean, Google Brain has a lot of amazing

01:09:51 AGI oriented people also.

01:09:53 And I mean, so I’d say from a public marketing view,

01:09:59 DeepMind and OpenAI are the two large well funded

01:10:03 organizations that have put the term and concept AGI

01:10:08 out there sort of as part of their public image.

01:10:12 But I mean, they’re certainly not,

01:10:15 there are other groups that are doing research

that seems just as AGI-ish to me.

01:10:20 I mean, including a bunch of groups in Google’s

01:10:23 main Mountain View office.

01:10:26 So yeah, it’s true.

01:10:27 AGI is somewhat away from the mainstream now.

01:10:33 But if you compare it to where it was 15 years ago,

01:10:38 there’s been an amazing mainstreaming.

01:10:41 You could say the same thing about super longevity research,

01:10:45 which is one of my application areas that I’m excited about.

01:10:49 I mean, I’ve been talking about this since the 90s,

01:10:52 but working on this since 2001.

01:10:54 And back then, really to say,

01:10:57 you’re trying to create therapies to allow people

01:10:59 to live hundreds of thousands of years,

01:11:02 you were way, way, way, way out of the industry,

01:11:05 academic mainstream.

01:11:06 But now, Google had Project Calico,

01:11:11 Craig Venter had Human Longevity Incorporated.

01:11:14 And then once the suits come marching in, right?

01:11:17 I mean, once there’s big money in it,

01:11:20 then people are forced to take it seriously

01:11:22 because that’s the way modern society works.

01:11:24 So it’s still not as mainstream as cancer research,

01:11:28 just as AGI is not as mainstream

01:11:31 as automated driving or something.

01:11:32 But the degree of mainstreaming that’s happened

01:11:36 in the last 10 to 15 years is astounding

01:11:40 to those of us who’ve been at it for a while.

01:11:42 Yeah, but there’s a marketing aspect to the term,

01:11:45 but in terms of actual full force research

01:11:48 that’s going on under the header of AGI,

01:11:51 it’s currently, I would say dominated,

01:11:54 maybe you can disagree,

01:11:55 dominated by neural networks research,

01:11:57 that the nonlinear regression, as you mentioned.

01:12:02 Like what’s your sense with OpenCog, with your work,

but in general, also logic based systems

01:12:10 and expert systems.

01:12:12 For me, always seemed to capture a deep element

01:12:18 of intelligence that needs to be there.

01:12:21 Like you said, it needs to learn,

01:12:23 it needs to be automated somehow,

01:12:24 but that seems to be missing from a lot of research currently.

01:12:31 So what’s your sense?

01:12:34 I guess one way to ask this question,

01:12:36 what’s your sense of what kind of things

01:12:39 will an AGI system need to have?

01:12:43 Yeah, that’s a very interesting topic

01:12:45 that I’ve thought about for a long time.

01:12:47 And I think there are many, many different approaches

01:12:53 that can work for getting to human level AI.

01:12:56 So I don’t think there’s like one golden algorithm,

01:13:02 or one golden design that can work.

01:13:05 And I mean, flying machines is the much worn

01:13:10 analogy here, right?

01:13:11 Like, I mean, you have airplanes, you have helicopters,

01:13:13 you have balloons, you have stealth bombers

01:13:17 that don’t look like regular airplanes.

You’ve got blimps.

01:13:21 Birds too.

01:13:21 Birds, yeah, and bugs, right?

01:13:24 Yeah.

01:13:25 And there are certainly many kinds of flying machines that.

01:13:29 And there’s a catapult that you can just launch.

01:13:32 And there’s bicycle powered like flying machines, right?

01:13:36 Nice, yeah.

01:13:37 Yeah, so now these are all analyzable

01:13:40 by a basic theory of aerodynamics, right?

01:13:43 Now, so one issue with AGI is we don’t yet have the analog

01:13:48 of the theory of aerodynamics.

01:13:50 And that’s what Marcus Hutter was trying to make

with AIXI and his general theory of general intelligence.

01:13:58 But that theory in its most clearly articulated parts

01:14:03 really only works for either infinitely powerful machines

01:14:07 or almost, or insanely impractically powerful machines.

01:14:11 So I mean, if you were gonna take a theory based approach

01:14:14 to AGI, what you would do is say, well, let’s take

what’s called, say, AIXItl, which is Hutter’s AIXI machine

01:14:25 that can work on merely insanely much processing power

01:14:29 rather than infinitely much.

01:14:30 What does TL stand for?

01:14:32 Time and length.

01:14:33 Okay.

01:14:34 So you’re basically how it.

01:14:35 Like constrained somehow.

01:14:36 Yeah, yeah, yeah.

So how AIXI works basically is each action

01:14:42 that it wants to take, before taking that action,

01:14:45 it looks at all its history.

01:14:47 And then it looks at all possible programs

01:14:49 that it could use to make a decision.

01:14:51 And it decides like which decision program

01:14:54 would have let it make the best decisions

01:14:56 according to its reward function over its history.

01:14:58 And it uses that decision program

01:15:00 to make the next decision, right?

01:15:02 It’s not afraid of infinite resources.

01:15:04 It’s searching through the space

01:15:06 of all possible computer programs

01:15:08 in between each action and each next action.

Now, AIXItl searches through all possible computer programs

01:15:15 that have runtime less than T and length less than L.

01:15:18 So it’s, which is still an impractically humongous space,

01:15:22 right?
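
To make that concrete, here is a minimal Python sketch of the selection loop described above, assuming a finite pool of candidate decision programs, a recorded history of (observation, action, reward) steps, and a reward_model for scoring counterfactual choices; all of these names are illustrative, and it follows the informal description given here rather than Hutter’s exact AIXI/AIXItl formalism.

    # Hypothetical sketch: before each action, score every candidate decision
    # program by how much reward it would have earned over the recorded
    # history, then let the best-scoring program pick the next action.
    def score(program, history, reward_model):
        """Reward this program would have accrued over the history, using
        reward_model(past, observation, action) to judge counterfactuals."""
        total = 0.0
        for t, (observation, _taken, _received) in enumerate(history):
            hypothetical = program.act(history[:t], observation)
            total += reward_model(history[:t], observation, hypothetical)
        return total

    def next_action(programs, history, observation, reward_model, T=None, L=None):
        # AIXItl-style bound: only consider programs whose runtime is <= T
        # and whose description length is <= L (hypothetical attributes here).
        feasible = [p for p in programs
                    if (T is None or p.runtime <= T)
                    and (L is None or p.length <= L)]
        best = max(feasible, key=lambda p: score(p, history, reward_model))
        return best.act(history, observation)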

01:15:23 So what you would like to do to make an AGI

01:15:27 and what will probably be done 50 years from now

01:15:29 to make an AGI is say, okay, well, we have some constraints.

01:15:34 We have these processing power constraints

01:15:37 and we have the space and time constraints on the program.

01:15:42 We have energy utilization constraints

01:15:45 and we have this particular class environments,

01:15:48 class of environments that we care about,

01:15:50 which may be say, you know, manipulating physical objects

01:15:54 on the surface of the earth,

01:15:55 communicating in human language.

01:15:57 I mean, whatever our particular, not annihilating humanity,

01:16:02 whatever our particular requirements happen to be.

01:16:05 If you formalize those requirements

01:16:07 in some formal specification language,

01:16:10 you should then be able to run

automated program specializer on AIXItl,

01:16:17 specialize it to the computing resource constraints

01:16:21 and the particular environment and goal.

01:16:23 And then it will spit out like the specialized version

of AIXItl to your resource restrictions

01:16:30 and your environment, which will be your AGI, right?

01:16:32 And that I think is how our super AGI

01:16:36 will create new AGI systems, right?

01:16:38 But that’s a very rush.

01:16:40 It seems really inefficient.

01:16:41 It’s a very Russian approach by the way,

01:16:43 like the whole field of program specialization

01:16:45 came out of Russia.

01:16:47 Can you backtrack?

01:16:48 So what is program specialization?

01:16:49 So it’s basically…

01:16:51 Well, take sorting, for example.

01:16:53 You can have a generic program for sorting lists,

01:16:56 but what if all your lists you care about

01:16:58 are length 10,000 or less?

01:16:59 Got it.

01:17:00 You can run an automated program specializer

01:17:02 on your sorting algorithm,

01:17:04 and it will come up with the algorithm

01:17:05 that’s optimal for sorting lists of length 1,000 or less,

01:17:08 or 10,000 or less, right?
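
As a toy illustration of that sorting example, the following Python sketch specializes a fully general sort into a version that assumes its inputs never exceed a fixed length; real program specializers (partial evaluators) perform this kind of transformation automatically, and the function names here are made up for illustration.

    def generic_sort(xs):
        # fully general comparison sort, no assumptions about the input
        return sorted(xs)

    def specialize_sort(max_len):
        """Return a sorter that assumes lists of length <= max_len only."""
        def specialized(xs):
            assert len(xs) <= max_len, "input outside the specialized domain"
            # with a small fixed bound, a simple insertion sort (or a fixed
            # sorting network) can avoid the general routine's overhead
            out = list(xs)
            for i in range(1, len(out)):
                key, j = out[i], i - 1
                while j >= 0 and out[j] > key:
                    out[j + 1] = out[j]
                    j -= 1
                out[j + 1] = key
            return out
        return specialized

    sort_short = specialize_sort(10000)
    print(sort_short([3, 1, 2]))  # [1, 2, 3]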

01:17:09 That’s kind of like, isn’t that the kind of the process

01:17:12 of evolution as a program specializer to the environment?

01:17:17 So you’re kind of evolving human beings,

01:17:20 or you’re living creatures.

01:17:21 Your Russian heritage is showing there.

01:17:24 So with Alexander Vityaev and Peter Anokhin and so on,

01:17:28 I mean, there’s a long history

01:17:31 of thinking about evolution that way also, right?

01:17:36 So, well, my point is that what we’re thinking of

01:17:40 as a human level general intelligence,

01:17:44 if you start from narrow AIs,

01:17:46 like are being used in the commercial AI field now,

01:17:50 then you’re thinking,

01:17:51 okay, how do we make it more and more general?

01:17:53 On the other hand,

if you start from AIXI or Schmidhuber’s Gödel machine,

01:17:58 or these infinitely powerful,

01:18:01 but practically infeasible AIs,

01:18:04 then getting to a human level AGI

01:18:06 is a matter of specialization.

01:18:08 It’s like, how do you take these

01:18:10 maximally general learning processes

01:18:12 and how do you specialize them

01:18:15 so that they can operate

01:18:17 within the resource constraints that you have,

01:18:20 but will achieve the particular things that you care about?

01:18:24 Because we humans are not maximally general intelligence.

01:18:28 If I ask you to run a maze in 750 dimensions,

01:18:31 you’d probably be very slow.

01:18:33 Whereas at two dimensions,

01:18:34 you’re probably way better, right?

01:18:37 So, I mean, we’re special because our hippocampus

01:18:40 has a two dimensional map in it, right?

01:18:43 And it does not have a 750 dimensional map in it.

01:18:46 So, I mean, we’re a peculiar mix

01:18:51 of generality and specialization, right?

We probably start quite general at birth.

Now, obviously still narrow,

01:19:00 but like more general than we are

01:19:03 at age 20 and 30 and 40 and 50 and 60.

01:19:07 I don’t think that, I think it’s more complex than that

01:19:10 because I mean, in some sense,

01:19:13 a young child is less biased

01:19:17 and the brain has yet to sort of crystallize

01:19:20 into appropriate structures

01:19:22 for processing aspects of the physical and social world.

01:19:25 On the other hand,

01:19:26 the young child is very tied to their sensorium.

01:19:30 Whereas we can deal with abstract mathematics,

01:19:33 like 750 dimensions and the young child cannot

01:19:37 because they haven’t grown what Piaget

01:19:40 called the formal capabilities.

01:19:44 They haven’t learned to abstract yet, right?

01:19:46 And the ability to abstract

01:19:48 gives you a different kind of generality

01:19:49 than what the baby has.

01:19:51 So, there’s both more specialization

01:19:55 and more generalization that comes

01:19:57 with the development process actually.

01:19:59 I mean, I guess just the trajectories

01:20:02 of the specialization are most controllable

01:20:06 at the young age, I guess is one way to put it.

01:20:09 Do you have kids?

01:20:10 No.

01:20:11 They’re not as controllable as you think.

01:20:13 So, you think it’s interesting.

01:20:15 I think, honestly, I think a human adult

01:20:19 is much more generally intelligent than a human baby.

01:20:23 Babies are very stupid, you know what I mean?

01:20:25 I mean, they’re cute, which is why we put up

01:20:29 with their repetitiveness and stupidity.

01:20:33 And they have what the Zen guys would call

01:20:35 a beginner’s mind, which is a beautiful thing,

01:20:38 but that doesn’t necessarily correlate

01:20:40 with a high level of intelligence.

01:20:43 On the plot of cuteness and stupidity,

01:20:46 there’s a process that allows us to put up

01:20:48 with their stupidity as they become more intelligent.

01:20:50 So, by the time you’re an ugly old man like me,

01:20:52 you gotta get really, really smart to compensate.

01:20:54 To compensate, okay, cool.

01:20:56 But yeah, going back to your original question,

01:20:59 so the way I look at human level AGI

01:21:05 is how do you specialize, you know,

01:21:08 unrealistically inefficient, superhuman,

01:21:12 brute force learning processes

01:21:14 to the specific goals that humans need to achieve

01:21:18 and the specific resources that we have.

01:21:21 And both of these, the goals and the resources

01:21:24 and the environments, I mean, all this is important.

01:21:27 And on the resources side, it’s important

01:21:31 that the hardware resources we’re bringing to bear

01:21:35 are very different than the human brain.

01:21:38 So the way I would want to implement AGI

01:21:42 on a bunch of neurons in a vat

01:21:45 that I could rewire arbitrarily is quite different

01:21:48 than the way I would want to create AGI

01:21:51 on say a modern server farm of CPUs and GPUs,

01:21:55 which in turn may be quite different

01:21:57 than the way I would want to implement AGI

01:22:00 on whatever quantum computer we’ll have in 10 years,

supposing someone makes a robust quantum Turing machine

01:22:06 or something, right?

01:22:08 So I think there’s been coevolution

01:22:12 of the patterns of organization in the human brain

01:22:16 and the physiological particulars

01:22:19 of the human brain over time.

01:22:23 And when you look at neural networks,

01:22:25 that is one powerful class of learning algorithms,

01:22:28 but it’s also a class of learning algorithms

01:22:30 that evolve to exploit the particulars of the human brain

01:22:33 as a computational substrate.

01:22:36 If you’re looking at the computational substrate

01:22:38 of a modern server farm,

01:22:41 you won’t necessarily want the same algorithms

01:22:43 that you want on the human brain.

01:22:45 And from the right level of abstraction,

01:22:48 you could look at maybe the best algorithms on the brain

01:22:51 and the best algorithms on a modern computer network

01:22:54 as implementing the same abstract learning

01:22:56 and representation processes,

01:22:59 but finding that level of abstraction

01:23:01 is its own AGI research project then, right?

01:23:04 So that’s about the hardware side

01:23:07 and the software side, which follows from that.

01:23:10 Then regarding what are the requirements,

01:23:14 I wrote the paper years ago

01:23:16 on what I called the embodied communication prior,

01:23:20 which was quite similar in intent

01:23:22 to Yoshua Bengio’s recent paper on the consciousness prior,

01:23:26 except I didn’t wanna wrap up consciousness in it

01:23:30 because to me, the qualia problem and subjective experience

01:23:34 is a very interesting issue also,

01:23:35 which we can chat about,

01:23:37 but I would rather keep that philosophical debate distinct

01:23:43 from the debate of what kind of biases

01:23:45 do you wanna put in a general intelligence

01:23:47 to give it human like general intelligence.

01:23:49 And I’m not sure Yoshua Bengio is really addressing

01:23:53 that kind of consciousness.

01:23:55 He’s just using the term.

01:23:56 I love Yoshua to pieces.

Like he’s by far my favorite of the lions of deep learning.

01:24:02 Yeah.

01:24:03 He’s such a good hearted guy.

01:24:05 He’s a good human being.

01:24:07 Yeah, for sure.

I am not sure he has plumbed the depths

01:24:11 of the philosophy of consciousness.

01:24:13 No, he’s using it as a sexy term.

01:24:15 Yeah, yeah, yeah.

01:24:15 So what I called it was the embodied communication prior.

01:24:21 Can you maybe explain it a little bit?

01:24:22 Yeah, yeah.

01:24:23 What I meant was, what are we humans evolved for?

01:24:26 You can say being human, but that’s very abstract, right?

01:24:29 I mean, our minds control individual bodies,

01:24:32 which are autonomous agents moving around in a world

01:24:36 that’s composed largely of solid objects, right?

01:24:41 And we’ve also evolved to communicate via language

01:24:46 with other solid object agents that are going around

01:24:49 doing things collectively with us

01:24:52 in a world of solid objects.

01:24:54 And these things are very obvious,

01:24:56 but if you compare them to the scope

01:24:58 of all possible intelligences

01:25:01 or even all possible intelligences

01:25:03 that are physically realizable,

01:25:05 that actually constrains things a lot.

01:25:07 So if you start to look at how would you realize

01:25:13 some specialized or constrained version

01:25:15 of universal general intelligence

01:25:18 in a system that has limited memory

01:25:21 and limited speed of processing,

01:25:23 but whose general intelligence will be biased

01:25:26 toward controlling a solid object agent,

01:25:28 which is mobile in a solid object world

01:25:31 for manipulating solid objects

01:25:33 and communicating via language with other similar agents

01:25:38 in that same world, right?

01:25:39 Then starting from that,

01:25:41 you’re starting to get a requirements analysis

01:25:43 for human level general intelligence.

01:25:48 And then that leads you into cognitive science

01:25:50 and you can look at, say, what are the different types

01:25:53 of memory that the human mind and brain has?

01:25:56 And this has matured over the last decades

01:26:00 and I got into this a lot.

01:26:02 So after getting my PhD in math,

01:26:04 I was an academic for eight years.

01:26:06 I was in departments of mathematics,

01:26:08 computer science, and psychology.

01:26:11 When I was in the psychology department

01:26:12 at the University of Western Australia,

01:26:14 I was focused on cognitive science of memory and perception.

01:26:18 Actually, I was teaching neural nets and deep neural nets

01:26:21 and it was multi layer perceptrons, right?

01:26:23 Psychology?

01:26:24 Yeah.

01:26:25 Cognitive science, it was cross disciplinary

01:26:27 among engineering, math, psychology, philosophy,

01:26:31 linguistics, computer science.

01:26:33 But yeah, we were teaching psychology students

01:26:35 to try to model the data from human cognition experiments

01:26:40 using multi layer perceptrons,

01:26:42 which was the early version of a deep neural network.

01:26:45 Very, very, yeah, recurrent back prop

01:26:47 was very, very slow to train back then, right?

01:26:51 So this is the study of these constraint systems

01:26:53 that are supposed to deal with physical objects.

01:26:55 So if you look at cognitive psychology,

01:27:01 you can see there’s multiple types of memory,

01:27:04 which are to some extent represented

01:27:06 by different subsystems in the human brain.

01:27:08 So we have episodic memory,

01:27:10 which takes into account our life history

01:27:13 and everything that’s happened to us.

01:27:15 We have declarative or semantic memory,

01:27:17 which is like facts and beliefs abstracted

01:27:20 from the particular situations that they occurred in.

01:27:22 There’s sensory memory, which to some extent

01:27:26 is sense modality specific,

01:27:27 and then to some extent is unified across sense modalities.

01:27:33 There’s procedural memory, memory of how to do stuff,

01:27:36 like how to swing the tennis racket, right?

01:27:38 Which is, there’s motor memory,

01:27:39 but it’s also a little more abstract than motor memory.

01:27:43 It involves cerebellum and cortex working together.

01:27:47 Then there’s memory linkage with emotion

01:27:51 which has to do with linkages of cortex and limbic system.

01:27:55 There’s specifics of spatial and temporal modeling

01:27:59 connected with memory, which has to do with hippocampus

01:28:02 and thalamus connecting to cortex.

01:28:05 And the basal ganglia, which influences goals.

01:28:08 So we have specific memory of what goals,

subgoals and sub subgoals we wanted to pursue

01:28:13 in which context in the past.

01:28:15 Human brain has substantially different subsystems

01:28:18 for these different types of memory

01:28:21 and substantially differently tuned learning,

01:28:24 like differently tuned modes of longterm potentiation

01:28:27 to do with the types of neurons and neurotransmitters

01:28:29 in the different parts of the brain

01:28:31 corresponding to these different types of knowledge.

01:28:33 And these different types of memory and learning

in the human brain, I mean, you can trace these all back

to embodied communication for controlling agents

01:28:41 in worlds of solid objects.

01:28:44 Now, so if you look at building an AGI system,

01:28:47 one way to do it, which starts more from cognitive science

01:28:50 than neuroscience is to say,

01:28:52 okay, what are the types of memory

01:28:55 that are necessary for this kind of world?

01:28:57 Yeah, yeah, necessary for this sort of intelligence.

01:29:00 What types of learning work well

01:29:02 with these different types of memory?

01:29:04 And then how do you connect all these things together, right?

01:29:07 And of course the human brain did it incrementally

01:29:10 through evolution because each of the sub networks

01:29:14 of the brain, I mean, it’s not really the lobes

01:29:16 of the brain, it’s the sub networks,

01:29:18 each of which is widely distributed,

each of the sub networks of the brain

01:29:23 co evolves with the other sub networks of the brain,

01:29:27 both in terms of its patterns of organization

01:29:29 and the particulars of the neurophysiology.

01:29:31 So they all grew up communicating

01:29:33 and adapting to each other.

01:29:34 It’s not like they were separate black boxes

01:29:36 that were then glommed together, right?

01:29:40 Whereas as engineers, we would tend to say,

01:29:43 let’s make the declarative memory box here

01:29:46 and the procedural memory box here

01:29:48 and the perception box here and wire them together.

01:29:51 And when you can do that, it’s interesting.

01:29:54 I mean, that’s how a car is built, right?

01:29:55 But on the other hand, that’s clearly not

01:29:58 how biological systems are made.

01:30:01 The parts co evolve so as to adapt and work together.

01:30:05 That’s by the way, how every human engineered system

01:30:09 that flies, that was, we were using that analogy

01:30:11 before it’s built as well.

01:30:13 So do you find this at all appealing?

01:30:14 Like there’s been a lot of really exciting,

01:30:16 which I find strange that it’s ignored work

01:30:20 in cognitive architectures, for example,

01:30:21 throughout the last few decades.

01:30:23 Do you find that?

01:30:24 Yeah, I mean, I had a lot to do with that community

and you know, Paul Rosenbloom and John Laird,

who built the SOAR architecture,

01:30:33 are friends of mine.

01:30:34 And I learned SOAR quite well

and ACT-R and these different cognitive architectures.

01:30:39 And how I was looking at the AI world about 10 years ago

01:30:44 before this whole commercial deep learning explosion was,

01:30:47 on the one hand, you had these cognitive architecture guys

01:30:51 who were working closely with psychologists

01:30:53 and cognitive scientists who had thought a lot

01:30:55 about how the different parts of a human like mind

01:30:58 should work together.

01:31:00 On the other hand, you had these learning theory guys

01:31:03 who didn’t care at all about the architecture,

but were just thinking about like,

01:31:07 how do you recognize patterns in large amounts of data?

01:31:10 And in some sense, what you needed to do

01:31:14 was to get the learning that the learning theory guys

01:31:18 were doing and put it together with the architecture

01:31:21 that the cognitive architecture guys were doing.

01:31:24 And then you would have what you needed.

01:31:25 Now, you can’t, unfortunately, when you look at the details,

01:31:31 you can’t just do that without totally rebuilding

01:31:34 what is happening on both the cognitive architecture

01:31:37 and the learning side.

01:31:38 So, I mean, they tried to do that in SOAR,

01:31:41 but what they ultimately did is like,

01:31:43 take a deep neural net or something for perception

01:31:46 and you include it as one of the black boxes.

01:31:50 It becomes one of the boxes.

01:31:51 The learning mechanism becomes one of the boxes

01:31:53 as opposed to fundamental part of the system.

01:31:57 You could look at some of the stuff DeepMind has done,

like the differentiable neural computer or something

01:32:03 that sort of has a neural net for deep learning perception.

01:32:07 It has another neural net, which is like a memory matrix

01:32:10 that stores, say, the map of the London subway or something.

So probably Demis Hassabis was thinking about this

01:32:16 like part of cortex and part of hippocampus

01:32:18 because hippocampus has a spatial map.

01:32:20 And when he was a neuroscientist,

01:32:21 he was doing a bunch on cortex hippocampus interconnection.

01:32:24 So there, the DNC would be an example of folks

01:32:27 from the deep neural net world trying to take a step

01:32:30 in the cognitive architecture direction

01:32:32 by having two neural modules that correspond roughly

01:32:35 to two different parts of the human brain

01:32:36 that deal with different kinds of memory and learning.

01:32:38 But on the other hand, it’s super, super, super crude

01:32:42 from the cognitive architecture view, right?

01:32:44 Just as what John Laird and Soar did with neural nets

01:32:48 was super, super crude from a learning point of view

01:32:51 because the learning was like off to the side,

01:32:53 not affecting the core representations, right?

01:32:55 I mean, you weren’t learning the representation.

01:32:57 You were learning the data that feeds into the…

01:33:00 You were learning abstractions of perceptual data

01:33:02 to feed into the representation that was not learned, right?

01:33:06 So yeah, this was clear to me a while ago.

01:33:11 And one of my hopes with the AGI community

01:33:14 was to sort of bring people

01:33:15 from those two directions together.

01:33:19 That didn’t happen much in terms of…

01:33:21 Not yet.

01:33:22 And what I was gonna say is it didn’t happen

01:33:24 in terms of bringing like the lions

01:33:26 of cognitive architecture together

01:33:28 with the lions of deep learning.

01:33:30 It did work in the sense that a bunch of younger researchers

01:33:33 have had their heads filled with both of those ideas.

01:33:35 This comes back to a saying my dad,

01:33:38 who was a university professor, often quoted to me,

01:33:41 which was, science advances one funeral at a time,

01:33:45 which I’m trying to avoid.

01:33:47 Like I’m 53 years old and I’m trying to invent

01:33:51 amazing, weird ass new things

01:33:53 that nobody ever thought about,

01:33:56 which we’ll talk about in a few minutes.

01:33:59 But there is that aspect, right?

01:34:02 Like the people who’ve been at AI a long time

01:34:05 and have made their career developing one aspect,

01:34:08 like a cognitive architecture or a deep learning approach,

01:34:12 it can be hard once you’re old

01:34:14 and have made your career doing one thing,

01:34:17 it can be hard to mentally shift gears.

01:34:19 I mean, I try quite hard to remain flexible minded.

01:34:23 Have you been successful somewhat in changing,

01:34:26 maybe, have you changed your mind on some aspects

01:34:29 of what it takes to build an AGI, like technical things?

01:34:32 The hard part is that the world doesn’t want you to.

01:34:36 The world or your own brain?

01:34:37 The world, well, that one point

01:34:39 is that your brain doesn’t want to.

01:34:41 The other part is that the world doesn’t want you to.

01:34:43 Like the people who have followed your ideas

01:34:46 get mad at you if you change your mind.

01:34:49 And the media wants to pigeonhole you as an avatar

01:34:54 of a certain idea.

01:34:57 But yeah, I’ve changed my mind on a bunch of things.

01:35:01 I mean, when I started my career,

01:35:03 I really thought quantum computing

01:35:05 would be necessary for AGI.

01:35:07 And I doubt it’s necessary now,

01:35:10 although I think it will be a super major enhancement.

01:35:14 But I mean, I’m now in the middle of embarking

01:35:19 on the complete rethink and rewrite from scratch

01:35:23 of our OpenCog AGI system together with Alexey Potapov

01:35:28 and his team in St. Petersburg,

01:35:29 who’s working with me in SingularityNet.

01:35:31 So now we’re trying to like go back to basics,

01:35:35 take everything we learned from working

01:35:37 with the current OpenCog system,

01:35:39 take everything everybody else has learned

01:35:41 from working with their proto AGI systems

01:35:45 and design the best framework for the next stage.

01:35:50 And I do think there’s a lot to be learned

01:35:53 from the recent successes with deep neural nets

01:35:56 and deep reinforcement systems.

01:35:59 I mean, people made these essentially trivial systems

01:36:02 work much better than I thought they would.

01:36:04 And there’s a lot to be learned from that.

01:36:07 And I wanna incorporate that knowledge appropriately

01:36:10 in our OpenCog 2.0 system.

01:36:13 On the other hand, I also think current deep neural net

01:36:18 architectures as such will never get you anywhere near AGI.

01:36:22 So I think you wanna avoid the pathology

01:36:25 of throwing the baby out with the bathwater

01:36:28 and like saying, well, these things are garbage

01:36:30 because foolish journalists overblow them

01:36:33 as being the path to AGI

01:36:37 and a few researchers overblow them as well.

01:36:41 There’s a lot of interesting stuff to be learned there

01:36:45 even though those are not the golden path.

01:36:48 So maybe this is a good chance to step back.

01:36:50 You mentioned OpenCog 2.0, but…

01:36:52 Go back to OpenCog 0.0, which exists now.

01:36:56 Alpha, yeah.

01:36:58 Yeah, maybe talk through the history of OpenCog

01:37:01 and your thinking about these ideas.

01:37:03 I would say OpenCog 2.0 is a term we’re throwing around

01:37:10 sort of tongue in cheek because the existing OpenCog system

01:37:14 that we’re working on now is not remotely close

01:37:17 to what we’d consider a 1.0, right?

01:37:20 I mean, it’s an early…

01:37:23 It’s been around, what, 13 years or something,

01:37:27 but it’s still an early stage research system, right?

01:37:29 And actually, we are going back to the beginning

01:37:37 in terms of theory and implementation

01:37:40 because we feel like that’s the right thing to do,

01:37:42 but I’m sure what we end up with is gonna have

01:37:45 a huge amount in common with the current system.

01:37:48 I mean, we all still like the general approach.

01:37:51 So first of all, what is OpenCog?

01:37:54 Sure, OpenCog is an open source software project

01:37:59 that I launched together with several others in 2008

01:38:04 and probably the first code written toward that

01:38:08 was written in 2001 or two or something

01:38:11 that was developed as a proprietary code base

01:38:15 within my AI company, Novamente LLC.

01:38:18 Then we decided to open source it in 2008,

cleaned up the code, threw out some things

01:38:23 and added some new things and…

01:38:26 What language is it written in?

It’s C++, primarily.

There’s a bunch of Scheme as well,

but most of it’s C++.

01:38:33 And it’s separate from something we’ll also talk about,

01:38:36 the SingularityNet.

01:38:37 So it was born as a non networked thing.

01:38:41 Correct, correct.

01:38:42 Well, there are many levels of networks involved here.

01:38:47 No connectivity to the internet, or no, at birth.

01:38:52 Yeah, I mean, SingularityNet is a separate project

01:38:57 and a separate body of code.

01:38:59 And you can use SingularityNet as part of the infrastructure

01:39:02 for a distributed OpenCog system,

01:39:04 but there are different layers.

01:39:07 Yeah, got it.

01:39:08 So OpenCog on the one hand as a software framework

01:39:14 could be used to implement a variety

01:39:17 of different AI architectures and algorithms,

01:39:21 but in practice, there’s been a group of developers

01:39:26 which I’ve been leading together with Linus Vepstas,

01:39:29 Neil Geisweiler, and a few others,

01:39:31 which have been using the OpenCog platform

01:39:35 and infrastructure to implement certain ideas

01:39:39 about how to make an AGI.

01:39:41 So there’s been a little bit of ambiguity

01:39:43 about OpenCog, the software platform

01:39:46 versus OpenCog, the AGI design,

01:39:49 because in theory, you could use that software to do,

01:39:52 you could use it to make a neural net.

01:39:53 You could use it to make a lot of different AGI.

01:39:55 What kind of stuff does the software platform provide,

01:39:58 like in terms of utilities, tools, like what?

01:40:00 Yeah, let me first tell about OpenCog

01:40:03 as a software platform,

01:40:05 and then I’ll tell you the specific AGI R&D

01:40:08 we’ve been building on top of it.

01:40:12 So the core component of OpenCog as a software platform

01:40:16 is what we call the atom space,

01:40:17 which is a weighted labeled hypergraph.

01:40:21 ATOM, atom space.

Atom space, yeah, yeah, not Adam, like Adam and Eve,

01:40:25 although that would be cool too.

01:40:28 Yeah, so you have a hypergraph, which is like,

01:40:32 so a graph in this sense is a bunch of nodes

01:40:35 with links between them.

01:40:37 A hypergraph is like a graph,

01:40:40 but links can go between more than two nodes.

01:40:43 So you have a link between three nodes.

01:40:45 And in fact, OpenCog’s atom space

01:40:49 would properly be called a metagraph

01:40:51 because you can have links pointing to links,

01:40:54 or you could have links pointing to whole subgraphs, right?

01:40:56 So it’s an extended hypergraph or a metagraph.

01:41:00 Is metagraph a technical term?

01:41:02 It is now a technical term.

01:41:03 Interesting.

01:41:04 But I don’t think it was yet a technical term

01:41:06 when we started calling this a generalized hypergraph.

01:41:10 But in any case, it’s a weighted labeled

01:41:13 generalized hypergraph or weighted labeled metagraph.

01:41:16 The weights and labels mean that the nodes and links

01:41:19 can have numbers and symbols attached to them.

01:41:22 So they can have types on them.

01:41:24 They can have numbers on them that represent,

01:41:27 say, a truth value or an importance value

01:41:30 for a certain purpose.

01:41:32 And of course, like with all things,

01:41:33 you can reduce that to a hypergraph,

01:41:35 and then the hypergraph can be reduced to a graph.

You can reduce a hypergraph to a graph,

01:41:37 and you could reduce a graph to an adjacency matrix.

01:41:39 So, I mean, there’s always multiple representations.

01:41:42 But there’s a layer of representation

01:41:44 that seems to work well here.

01:41:45 Got it.

01:41:45 Right, right, right.

01:41:46 And so similarly, you could have a link to a whole graph

01:41:52 because a whole graph could represent,

01:41:53 say, a body of information.

01:41:54 And I could say, I reject this body of information.

01:41:58 Then one way to do that is make that link

01:42:00 go to that whole subgraph representing

01:42:02 the body of information, right?

01:42:04 I mean, there are many alternate representations,

but that’s, anyway, what we have in OpenCog,

01:42:10 we have an atom space, which is this weighted, labeled,

01:42:13 generalized hypergraph.

01:42:15 Knowledge store, it lives in RAM.

01:42:17 There’s also a way to back it up to disk.

01:42:20 There are ways to spread it among

01:42:22 multiple different machines.

01:42:24 Then there are various utilities for dealing with that.

01:42:27 So there’s a pattern matcher,

01:42:29 which lets you specify a sort of abstract pattern

01:42:33 and then search through a whole atom space

01:42:36 with labeled hypergraph to see what subhypergraphs

01:42:39 may match that pattern, for an example.
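
As a rough picture of that data model, here is a small Python sketch of a weighted, labeled generalized hypergraph in which links can point at nodes or at other links, plus a toy pattern matcher; the class and method names are invented for illustration and are not the actual OpenCog C++ or Scheme API.

    class Atom:
        """A node or link: a type label, optional name, optional targets
        (which may themselves be links), and numeric weights."""
        def __init__(self, atom_type, name=None, targets=(), truth=1.0, importance=0.0):
            self.type = atom_type          # e.g. "ConceptNode", "InheritanceLink"
            self.name = name               # only meaningful for nodes
            self.targets = list(targets)   # other atoms; links may point to links
            self.truth = truth             # weight: truth value
            self.importance = importance   # weight: attention / importance

    class AtomSpace:
        def __init__(self):
            self.atoms = []

        def add(self, atom):
            self.atoms.append(atom)
            return atom

        def match(self, atom_type=None, predicate=lambda a: True):
            """Crude pattern matcher: atoms of a given type satisfying an
            arbitrary structural predicate."""
            return [a for a in self.atoms
                    if (atom_type is None or a.type == atom_type) and predicate(a)]

    space = AtomSpace()
    cat = space.add(Atom("ConceptNode", "cat"))
    animal = space.add(Atom("ConceptNode", "animal"))
    space.add(Atom("InheritanceLink", targets=[cat, animal], truth=0.95))

    # find every inheritance relationship whose second target is "animal"
    hits = space.match("InheritanceLink",
                       lambda a: a.targets and a.targets[-1].name == "animal")
    print([(h.targets[0].name, h.truth) for h in hits])  # [('cat', 0.95)]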

01:42:42 So that’s, then there’s something called

the CogServer in OpenCog,

01:42:48 which lets you run a bunch of different agents

01:42:52 or processes in a scheduler.

01:42:55 And each of these agents, basically it reads stuff

01:42:59 from the atom space and it writes stuff to the atom space.

01:43:01 So this is sort of the basic operational model.
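
A toy picture of that operational model, with invented names rather than the real CogServer: a scheduler runs a set of agents round-robin, and each agent’s step consists almost entirely of reading from and writing to a shared store standing in for the atom space.

    class IncrementAgent:
        """Example agent: reads a counter from the shared store and writes it back incremented."""
        def step(self, store):
            store["counter"] = store.get("counter", 0) + 1

    class ReportAgent:
        """Example agent: reads what other agents wrote and records a summary."""
        def step(self, store):
            store["report"] = f"counter is {store.get('counter', 0)}"

    def run_scheduler(agents, store, cycles):
        for _ in range(cycles):
            for agent in agents:
                agent.step(store)  # all cooperation happens through the store

    shared = {}
    run_scheduler([IncrementAgent(), ReportAgent()], shared, cycles=3)
    print(shared)  # {'counter': 3, 'report': 'counter is 3'}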

01:43:05 That’s the software framework.

01:43:07 And of course that’s, there’s a lot there

01:43:10 just from a scalable software engineering standpoint.

01:43:13 So you could use this, I don’t know if you’ve,

01:43:15 have you looked into the Stephen Wolfram’s physics project

01:43:18 recently with the hypergraphs and stuff?

01:43:20 Could you theoretically use like the software framework

01:43:22 to play with it? You certainly could,

01:43:23 although Wolfram would rather die

01:43:26 than use anything but Mathematica for his work.

01:43:29 Well that’s, yeah, but there’s a big community of people

01:43:32 who are, you know, would love integration.

01:43:36 Like you said, the young minds love the idea

01:43:38 of integrating, of connecting things.

01:43:40 Yeah, that’s right.

01:43:41 And I would add on that note,

01:43:42 the idea of using hypergraph type models in physics

01:43:46 is not very new.

01:43:47 Like if you look at…

01:43:49 The Russians did it first.

01:43:50 Well, I’m sure they did.

And a guy named Ben Dribus, who’s a mathematician,

01:43:55 a professor in Louisiana or somewhere,

01:43:58 had a beautiful book on quantum sets and hypergraphs

01:44:01 and algebraic topology for discrete models of physics.

01:44:05 And carried it much farther than Wolfram has,

01:44:09 but he’s not rich and famous,

01:44:10 so it didn’t get in the headlines.

01:44:13 But yeah, Wolfram aside, yeah,

01:44:15 certainly that’s a good way to put it.

01:44:17 The whole OpenCog framework,

01:44:19 you could use it to model biological networks

01:44:22 and simulate biology processes.

01:44:24 You could use it to model physics

01:44:26 on discrete graph models of physics.

01:44:30 So you could use it to do, say, biologically realistic

01:44:36 neural networks, for example.

01:44:39 And that’s a framework.

01:44:42 What do agents and processes do?

01:44:44 Do they grow the graph?

01:44:45 What kind of computations, just to get a sense,

01:44:48 are they supposed to do?

01:44:49 So in theory, they could do anything they want to do.

01:44:51 They’re just C++ processes.

01:44:53 On the other hand, the computation framework

01:44:56 is sort of designed for agents

01:44:59 where most of their processing time

01:45:02 is taken up with reads and writes to the atom space.

01:45:05 And so that’s a very different processing model

01:45:09 than, say, the matrix multiplication based model

01:45:12 that underlies most deep learning systems, right?

01:45:15 So you could create an agent

01:45:19 that just factored numbers for a billion years.

01:45:22 It would run within the OpenCog platform,

01:45:25 but it would be pointless, right?

01:45:26 I mean, the point of doing OpenCog

01:45:28 is because you want to make agents

01:45:30 that are cooperating via reading and writing

01:45:33 into this weighted labeled hypergraph, right?

01:45:36 And that has both cognitive architecture importance

01:45:41 because then this hypergraph is being used

01:45:43 as a sort of shared memory

01:45:46 among different cognitive processes,

01:45:48 but it also has software and hardware

01:45:51 implementation implications

01:45:52 because current GPU architectures

01:45:54 are not so useful for OpenCog,

01:45:57 whereas a graph chip would be incredibly useful, right?

01:46:01 And I think Graphcore has those now,

01:46:03 but they’re not ideally suited for this.

01:46:05 But I think in the next, let’s say, three to five years,

01:46:10 we’re gonna see new chips

01:46:12 where like a graph is put on the chip

01:46:14 and the back and forth between multiple processes

01:46:19 acting SIMD and MIMD on that graph is gonna be fast.

01:46:23 And then that may do for OpenCog type architectures

01:46:26 what GPUs did for deep neural architecture.

01:46:29 It’s a small tangent.

01:46:31 Can you comment on thoughts about neuromorphic computing?

01:46:34 So like hardware implementations

01:46:36 of all these different kind of, are you interested?

01:46:39 Are you excited by that possibility?

01:46:41 I’m excited by graph processors

01:46:42 because I think they can massively speed up OpenCog,

01:46:46 which is a class of architectures that I’m working on.

01:46:50 I think if, you know, in principle, neuromorphic computing

01:46:57 should be amazing.

01:46:58 I haven’t yet been fully sold

01:47:00 on any of the systems that are out.

01:47:03 They’re like, memristors should be amazing too, right?

01:47:06 So a lot of these things have obvious potential,

01:47:09 but I haven’t yet put my hands on a system

01:47:11 that seemed to manifest that.

01:47:13 Memristor systems should be amazing,

01:47:14 but the current systems have not been great.

01:47:17 Yeah, I mean, look, for example,

01:47:19 if you wanted to make a biologically realistic

01:47:23 hardware neural network,

01:47:25 like making a circuit in hardware

01:47:31 that emulated like the Hodgkin–Huxley equation

01:47:34 or the Izhikevich equation,

01:47:35 like differential equations

01:47:38 for a biologically realistic neuron

01:47:40 and putting that in hardware on the chip,

01:47:43 that would seem to make it more feasible

01:47:46 to make a large scale, truly biologically realistic

01:47:50 neural network.

01:47:51 Now, what’s been done so far is not like that.

01:47:54 So I guess personally, as a researcher,

01:47:57 I mean, I’ve done a bunch of work in computational neuroscience

01:48:02 where I did some work with IARPA in DC,

01:48:05 the Intelligence Advanced Research Projects Activity.

01:48:08 We were looking at how do you make

01:48:10 a biologically realistic simulation

01:48:13 of seven different parts of the brain

01:48:15 cooperating with each other,

01:48:17 using like realistic nonlinear dynamical models of neurons,

01:48:20 and how do you get that to simulate

01:48:21 what’s going on in the mind of a geo intelligence analyst

01:48:24 while they’re trying to find terrorists on a map, right?

01:48:27 So if you want to do something like that,

01:48:29 having neuromorphic hardware that really let you simulate

01:48:34 like a realistic model of the neuron would be amazing.

01:48:38 But that’s sort of with my computational neuroscience

01:48:42 hat on, right?

01:48:43 With an AGI hat on, I’m just more interested

01:48:47 in these hypergraph knowledge representation

01:48:50 based architectures, which would benefit more

01:48:54 from various types of graph processors

01:48:57 because the main processing bottleneck

01:49:00 is reading and writing to RAM.

01:49:02 It’s reading and writing to the graph in RAM.

01:49:03 The main processing bottleneck for this kind of

01:49:06 proto AGI architecture is not multiplying matrices.

01:49:09 And for that reason, GPUs, which are really good

01:49:13 at multiplying matrices, don’t apply as well.

01:49:17 There are frameworks like Gunrock and others

01:49:20 that try to boil down graph processing

01:49:22 to matrix operations, and they’re cool,

01:49:24 but you’re still putting a square peg

01:49:26 into a round hole in a certain way.

01:49:28 The same is true of, I mean, current quantum machine learning,

01:49:32 which is very cool.

01:49:34 It’s also all about how to get matrix and vector operations

01:49:37 in quantum mechanics, and I see why that’s natural to do.

01:49:41 I mean, quantum mechanics is all unitary matrices

01:49:44 and vectors, right?

01:49:45 On the other hand, you could also try

01:49:48 to make graph centric quantum computers,

01:49:50 which I think is where things will go.

01:49:54 And then we can have, then we can make,

01:49:57 like take the OpenCog implementation layer,

01:50:00 implement it in a collapsed state inside a quantum computer.

01:50:04 But that may be the singularity squared, right?

01:50:06 I’m not sure we need that to get to human level.

01:50:12 That’s already beyond the first singularity.

01:50:14 But can we just go back to OpenCog?

01:50:17 Yeah, and the hypergraph and OpenCog.

01:50:20 That’s the software framework, right?

01:50:21 So the next thing is our cognitive architecture

01:50:25 tells us particular algorithms to put there.

01:50:27 Got it.

01:50:28 Can we backtrack on the kind of, is this graph designed,

01:50:33 is it in general supposed to be sparse

01:50:37 and the operations constantly grow and change the graph?

01:50:40 Yeah, the graph is sparse.

01:50:42 But is it constantly adding links and so on?

01:50:45 It is a self modifying hypergraph.

01:50:47 So it’s not, so the write and read operations

01:50:49 you’re referring to, this isn’t just a fixed graph

01:50:53 to which you change the weights, it’s a constantly growing graph.

01:50:55 Yeah, that’s true.

01:50:58 So it is a different model than,

01:51:03 say, current deep neural nets,

01:51:04 which have a fixed neural architecture

01:51:06 where you’re updating the weights.

01:51:08 Although there have been, like, cascade correlation

01:51:10 neural net architectures that grow new nodes and links,

01:51:13 but the most common neural architectures now

01:51:16 have a fixed neural architecture,

01:51:17 you’re updating the weights.

01:51:19 And then in OpenCog, you can update the weights

01:51:22 and that certainly happens a lot,

01:51:24 but adding new nodes, adding new links,

01:51:28 removing nodes and links is an equally critical part

01:51:30 of the system’s operations.

01:51:32 Got it.

01:51:33 So now when you start to add these cognitive algorithms

01:51:37 on top of this open cog architecture,

01:51:39 what does that look like?

01:51:41 Yeah, so within this framework then,

01:51:44 creating a cognitive architecture is basically two things.

01:51:48 It’s choosing what type system you wanna put

01:51:52 on the nodes and links in the hypergraph,

01:51:53 what types of nodes and links you want.

01:51:56 And then it’s choosing what collection of agents,

01:52:01 what collection of AI algorithms or processes

01:52:04 are gonna run to operate on this hypergraph.

01:52:08 And of course those two decisions

01:52:10 are closely connected to each other.

01:52:13 So in terms of the type system,

01:52:17 there are some links that are more neural net like,

01:52:19 they just, like, have weights that get updated

01:52:22 by Hebbian learning, and activation spreads along them.

01:52:26 There are other links that are more logic like

01:52:29 and nodes that are more logic like.

01:52:30 So you could have a variable node

01:52:32 and you can have a node representing a universal

01:52:34 or existential quantifier as in predicate logic

01:52:37 or term logic.

01:52:39 So you can have logic like nodes and links,

01:52:42 or you can have neural like nodes and links.

01:52:44 You can also have procedure like nodes and links

01:52:47 as in, say, combinatory logic or lambda calculus

01:52:51 representing programs.

01:52:53 So you can have nodes and links representing

01:52:56 many different types of semantics,

01:52:58 which means you could make a horrible ugly mess

01:53:00 or you could make a system

01:53:02 where these different types of knowledge

01:53:04 all interpenetrate and synergize

01:53:06 with each other beautifully, right?
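
As a hedged illustration of that mix, with type names made up for the sketch rather than taken from OpenCog’s actual type system: logic-like, neural-like, and procedure-like atoms can all live in one store and point at the same concepts.

    # Sketch only: a few atom types suggesting how logic-like, neural-like, and
    # procedure-like knowledge can share one hypergraph. Names are illustrative.

    from dataclasses import dataclass
    from typing import Callable, Tuple

    @dataclass
    class InheritanceLink:          # logic-like: "A inherits from B" with a truth value
        source: str
        target: str
        strength: float             # probabilistic truth value
        confidence: float

    @dataclass
    class HebbianLink:              # neural-like: attention spreads along this
        source: str
        target: str
        weight: float

    @dataclass
    class LambdaNode:               # procedure-like: an executable program fragment
        name: str
        arg_types: Tuple[str, ...]
        body: Callable

    atoms = [
        InheritanceLink("cat", "animal", strength=0.95, confidence=0.9),
        HebbianLink("cat", "purring", weight=0.7),
        LambdaNode("double", ("Number",), body=lambda x: 2 * x),
    ]

The point of a mix like this is that a logic rule, an attention-spreading rule, and a program can all reference the same node, which is what lets the different learning processes interpenetrate rather than just pass results around.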

01:53:08 So the hypergraph can contain programs.

01:53:12 Yeah, it can contain programs,

01:53:14 although in the current version,

01:53:17 it is a very inefficient way

01:53:19 to guide the execution of programs,

01:53:21 which is one thing that we are aiming to resolve

01:53:25 with our rewrite of the system now.

01:53:27 So what to you is the most beautiful aspect of OpenCog?

01:53:32 Just to you personally,

01:53:34 some aspect that captivates your imagination

01:53:38 from beauty or power?

01:53:42 What fascinates me is finding a common representation

01:53:48 that underlies abstract, declarative knowledge

01:53:53 and sensory knowledge and movement knowledge

01:53:57 and procedural knowledge and episodic knowledge,

01:54:00 finding the right level of representation

01:54:03 where all these types of knowledge are stored

01:54:06 in a sort of universal and interconvertible

01:54:10 yet practically manipulable way, right?

01:54:13 So to me, that’s the core,

01:54:16 because once you’ve done that,

01:54:18 then the different learning algorithms

01:54:20 can help each other out. Like what you want is,

01:54:23 if you have a logic engine

01:54:25 that helps with declarative knowledge

01:54:26 and you have a deep neural net

01:54:28 that gathers perceptual knowledge,

01:54:29 and you have, say, an evolutionary learning system

01:54:32 that learns procedures,

01:54:34 you want these to not only interact

01:54:36 on the level of sharing results

01:54:38 and passing inputs and outputs to each other,

01:54:41 you want the logic engine, when it gets stuck,

01:54:43 to be able to share its intermediate state

01:54:46 with the neural net and with the evolutionary system

01:54:49 and with the evolutionary learning algorithm

01:54:52 so that they can help each other out of bottlenecks

01:54:55 and help each other solve combinatorial explosions

01:54:58 by intervening inside each other’s cognitive processes.

01:55:02 But that can only be done

01:55:03 if the intermediate state of a logic engine,

01:55:05 the evolutionary learning engine,

01:55:07 and a deep neural net are represented in the same form.

01:55:11 And that’s what we figured out how to do

01:55:13 by putting the right type system

01:55:14 on top of this weighted labeled hypergraph.

01:55:17 So is there, can you maybe elaborate

01:55:19 on what are the different characteristics

01:55:21 of a type system that can coexist

01:55:26 amongst all these different kinds of knowledge

01:55:28 that needs to be represented?

01:55:30 And is, I mean, like, is it hierarchical?

01:55:34 Just any kind of insights you can give

01:55:36 on that kind of type system?

01:55:37 Yeah, yeah, so this gets very nitty gritty

01:55:41 and mathematical, of course,

01:55:44 but one key part is switching

01:55:47 from predicate logic to term logic.

01:55:50 What is predicate logic?

01:55:51 What is term logic?

01:55:53 So term logic was invented by Aristotle,

01:55:56 or at least that’s the oldest recollection we have of it.

01:56:01 But term logic breaks down basic logic

01:56:05 into basically simple links between nodes,

01:56:07 like an inheritance link between node A and node B.

01:56:12 So in term logic, the basic deduction operation

01:56:16 is A implies B, B implies C, therefore A implies C.

01:56:21 Whereas in predicate logic,

01:56:22 the basic operation is modus ponens,

01:56:24 like A, and A implies B, therefore B.

01:56:27 So it’s a slightly different way of breaking down logic,

01:56:31 but by breaking down logic into term logic,

01:56:35 you get a nice way of breaking logic down

01:56:37 into nodes and links.

01:56:40 So your concepts can become nodes,

01:56:42 the logical relations become links.

01:56:45 And so then inference is like,

01:56:46 so if this link is A implies B,

01:56:48 this link is B implies C,

01:56:50 then deduction builds a link A implies C.

01:56:53 And your probabilistic algorithm

01:56:54 can assign a certain weight there.
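
A toy worked example of that deduction step; the strength combination below is a deliberately naive placeholder for illustration, not PLN’s actual deduction formula.

    # Toy term-logic deduction: from A->B and B->C, build A->C with a derived strength.
    # The combination rule here (multiplying strengths) is a simplification only.

    def deduce(link_ab, link_bc):
        (a, b1, s_ab), (b2, c, s_bc) = link_ab, link_bc
        assert b1 == b2, "middle terms must match"
        return (a, c, s_ab * s_bc)   # naive: multiply strengths

    cat_mammal = ("cat", "mammal", 0.98)
    mammal_animal = ("mammal", "animal", 0.99)
    print(deduce(cat_mammal, mammal_animal))   # ('cat', 'animal', 0.9702)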

01:56:57 Now, you may also have like a Hebbian neural link

01:57:00 from A to B, which is the degree to which thinking,

01:57:03 the degree to which A being the focus of attention

01:57:06 should make B the focus of attention, right?

01:57:09 So you could have then a neural link

01:57:10 and you could have a symbolic,

01:57:13 like logical inheritance link in your term logic.

01:57:17 And they have separate meaning,

01:57:19 but they could be used to guide each other as well.

01:57:22 Like if there’s a large amount of neural weight

01:57:26 on the link between A and B,

01:57:28 that may direct your logic engine to think about,

01:57:30 well, what is the relation?

01:57:31 Are they similar?

01:57:32 Is there an inheritance relation?

01:57:33 Are they similar in some context?

01:57:37 On the other hand, if there’s a logical relation

01:57:39 between A and B, that may direct your neural component

01:57:43 to think, well, when I’m thinking about A,

01:57:45 should I be directing some attention to B also?

01:57:48 Because there’s a logical relation.

01:57:50 So in terms of logic,

01:57:53 there’s a lot of thought that went into

01:57:54 how do you break down logic relations,

01:57:58 including basic sort of propositional logic relations

01:58:02 as Aristotelian term logic deals with,

01:58:04 and then quantifier logic relations also.

01:58:07 How do you break those down elegantly into a hypergraph?

01:58:10 Because, I mean, you can boil a logic expression down

01:58:13 into a graph in many different ways.

01:58:14 Many of them are very ugly, right?

01:58:16 We tried to find elegant ways

01:58:19 of sort of hierarchically breaking down

01:58:22 complex logic expressions into nodes and links.

01:58:26 So that if you have say different nodes representing,

01:58:31 Ben, AI, Lex, interview or whatever,

01:58:34 the logic relations between those things

01:58:36 are compact in the node and link representation.

01:58:40 So that when you have a neural net acting

01:58:42 on the same nodes and links,

01:58:43 the neural net and the logic engine

01:58:45 can sort of interoperate with each other.

01:58:48 And also interpretable by humans.

01:58:49 Is that important?

01:58:51 That’s tough.

01:58:52 Yeah, in simple cases, it’s interpretable by humans.

01:58:54 But honestly, I would say logic systems

01:58:59 give more potential

01:59:05 for transparency and comprehensibility

01:59:09 than neural net systems,

01:59:11 but you still have to work at it.

01:59:12 Because I mean, if I show you a predicate logic proposition

01:59:16 with like 500 nested universal and existential quantifiers

01:59:20 and 217 variables, that’s no more comprehensible

01:59:23 than the weight matrices of a neural network, right?

01:59:26 So I’d say the logic expressions

01:59:28 that AI learns from its experience

01:59:30 are mostly totally opaque to human beings

01:59:33 and maybe even harder to understand than a neural net’s.

01:59:36 Because I mean, when you have multiple

01:59:37 nested quantifier bindings,

01:59:38 it’s a very high level of abstraction.

01:59:41 There is a difference though,

01:59:42 in that within logic, it’s a little more straightforward

01:59:46 to pose the problem of like normalize this

01:59:49 and boil this down to a certain form.

01:59:51 I mean, you can do that in neural nets too.

01:59:52 Like you can distill a neural net to a simpler form,

01:59:55 but that’s more often done to make a neural net

01:59:57 that’ll run on an embedded device or something.

01:59:59 It’s harder to distill a net to a comprehensible form

02:00:03 than it is to simplify a logic expression

02:00:05 to a comprehensible form, but it doesn’t come for free.

02:00:08 Like what’s in the AI’s mind is incomprehensible

02:00:13 to a human unless you do some special work

02:00:15 to make it comprehensible.

02:00:16 So on the procedural side, there’s some different

02:00:20 and sort of interesting voodoo there.

02:00:23 I mean, if you’re familiar in computer science,

02:00:25 there’s something called the Curry Howard correspondence,

02:00:27 which is a one to one mapping between proofs and programs.

02:00:30 So every program can be mapped into a proof.

02:00:33 Every proof can be mapped into a program.

02:00:35 You can model this using category theory

02:00:37 and a bunch of nice math,

02:00:40 but we wanna make that practical, right?

02:00:43 So that if you have an executable program

02:00:46 that like moves the robot’s arm or figures out

02:00:49 in what order to say things in a dialogue,

02:00:51 that’s a procedure represented in OpenCog’s hypergraph.

02:00:55 But if you wanna reason on how to improve that procedure,

02:01:00 you need to map that procedure into logic

02:01:03 using Curry Howard isomorphism.

02:01:05 So then the logic engine can reason

02:01:09 about how to improve that procedure

02:01:11 and then map that back into the procedural representation

02:01:14 that is efficient for execution.
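
A rough cartoon of that back-and-forth, under the assumption that types are just labels and the "reasoning" is a trivial simplification; this is only the shape of the Curry–Howard-style round trip, not OpenCog’s actual machinery.

    # Cartoon of procedure <-> logic mapping. Types are just strings; the "logic engine"
    # here only removes redundant identity steps. Purely illustrative.

    steps = [
        ("GraspPlan", "ArmTrajectory", lambda p: p + ["reach"]),
        ("ArmTrajectory", "ArmTrajectory", lambda t: t),            # redundant identity step
        ("ArmTrajectory", "MotorCommands", lambda t: t + ["close_gripper"]),
    ]

    # Procedure -> logic: each step becomes an implication; the chain is a proof
    # of GraspPlan -> MotorCommands.
    implications = [(src, dst) for src, dst, _ in steps]

    # "Reasoning": drop steps whose implication is a trivial A -> A.
    improved = [(src, dst, fn) for src, dst, fn in steps if src != dst]

    # Logic -> procedure: compose the remaining steps back into something executable.
    def run(procedure, plan):
        for _, _, fn in procedure:
            plan = fn(plan)
        return plan

    print(implications)
    print(run(improved, []))   # ['reach', 'close_gripper']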

02:01:16 So again, that comes down to not just

02:01:18 can you make your procedure into a bunch of nodes and links?

02:01:21 Cause I mean, that can be done trivially.

02:01:23 A C++ compiler has nodes and links inside it.

02:01:26 Can you boil down your procedure

02:01:27 into a bunch of nodes and links

02:01:29 in a way that’s like hierarchically decomposed

02:01:32 and simple enough?

02:01:33 It can reason about.

02:01:34 Yeah, yeah, that given the resource constraints at hand,

02:01:37 you can map it back and forth to your term logic,

02:01:40 like fast enough

02:01:42 and without having a bloated logic expression, right?

02:01:45 So there’s just a lot of,

02:01:48 there’s a lot of nitty gritty particulars there,

02:01:50 but by the same token, if you ask a chip designer,

02:01:54 like how do you make the Intel I7 chip so good?

02:01:58 There’s a long list of technical answers there,

02:02:02 which will take a while to go through, right?

02:02:04 And this has been decades of work.

02:02:06 I mean, the first AI system of this nature I tried to build

02:02:10 was called WebMind in the mid 1990s.

02:02:13 And we had a big graph,

02:02:15 a big graph operating in RAM implemented with Java 1.1,

02:02:18 which was a terrible, terrible implementation idea.

02:02:21 And then each node had its own processing.

02:02:25 So, like,

02:02:27 the core loop looped through all nodes in the network

02:02:29 and let each node enact what its little thing was doing.

02:02:32 And we had logic and neural nets in there,

02:02:35 and evolutionary learning,

02:02:38 but we hadn’t done enough of the math

02:02:40 to get them to operate together very cleanly.

02:02:43 So it was really, it was quite a horrible mess.

02:02:46 So as well as shifting an implementation

02:02:49 where the graph is its own object

02:02:51 and the agents are separately scheduled,

02:02:54 we’ve also done a lot of work

02:02:56 on how do you represent programs?

02:02:58 How do you represent procedures?

02:03:00 You know, how do you represent genotypes for evolution

02:03:03 in a way that the interoperability

02:03:06 between the different types of learning

02:03:09 associated with these different types of knowledge

02:03:11 actually works?

02:03:13 And that’s been quite difficult.

02:03:14 It’s taken decades and it’s totally off to the side

02:03:18 of what the commercial mainstream of the AI field is doing,

02:03:23 which isn’t thinking about representation at all really.

02:03:27 Although you could see like in the DNC,

02:03:30 they had to think a little bit about

02:03:32 how do you make representation of a map

02:03:33 in this memory matrix work together

02:03:36 with the representation needed

02:03:38 for say visual pattern recognition

02:03:40 in the hierarchical neural network.

02:03:42 But I would say we have taken that direction

02:03:45 of taking the types of knowledge you need

02:03:47 for different types of learning,

02:03:49 like declarative, procedural, attentional,

02:03:52 and how do you make these types of knowledge get represented

02:03:55 in a way that allows cross learning

02:03:58 across these different types of memory.

02:04:00 We’ve been prototyping and experimenting with this

02:04:03 within OpenCog and before that WebMind

02:04:07 since the mid 1990s.

02:04:10 Now, disappointingly to all of us,

02:04:13 this has not yet been cashed out in an AGI system, right?

02:04:18 I mean, we’ve used this system

02:04:20 within our consulting business.

02:04:22 So we’ve built natural language processing

02:04:24 and robot control and financial analysis.

02:04:27 We’ve built a bunch of sort of vertical market specific

02:04:31 proprietary AI projects.

02:04:33 They use OpenCog on the backend,

02:04:36 but we haven’t, that’s not the AGI goal, right?

02:04:39 It’s interesting, but it’s not the AGI goal.

02:04:42 So now what we’re looking at with our rebuild of the system.

02:04:48 2.0.

02:04:49 Yeah, we’re also calling it True AGI.

02:04:51 So we’re not quite sure what the name is yet.

02:04:54 We made a website for trueagi.io,

02:04:57 but we haven’t put anything on there yet.

02:04:59 We may come up with an even better name.

02:05:02 It’s kind of like the real AI starting point

02:05:04 for your AGI book.

02:05:05 Yeah, but I like True better

02:05:06 because True has like, you can be true hearted, right?

02:05:09 You can be true to your girlfriend.

02:05:11 So True has a number of meanings, and it also has logic in it, right?

02:05:15 Because logic is a key part of the system.

02:05:18 So yeah, with the True AGI system,

02:05:22 we’re sticking with the same basic architecture,

02:05:25 but we’re trying to build on what we’ve learned.

02:05:29 And one thing we’ve learned is that,

02:05:32 we need type checking among dependent types

02:05:36 to be much faster

02:05:38 and among probabilistic dependent types to be much faster.

02:05:41 So as it is now,

02:05:43 you can have complex types on the nodes and links.

02:05:47 But if you wanna put,

02:05:48 like if you want types to be first class citizens,

02:05:51 so that the types can be variables

02:05:53 and then you do type checking

02:05:55 among complex higher order types.

02:05:58 You can do that in the system now, but it’s very slow.

02:06:00 This is stuff like it’s done

02:06:02 in cutting edge programming languages like Agda or something,

02:06:05 these obscure research languages.

02:06:07 On the other hand,

02:06:08 we’ve been doing a lot of work tying together deep neural nets

02:06:11 with symbolic learning.

02:06:12 So we did a project for Cisco, for example,

02:06:15 which was on, this was street scene analysis,

02:06:17 but they had deep neural models

02:06:18 for a bunch of cameras watching street scenes,

02:06:21 but they trained a different model for each camera

02:06:23 because they couldn’t get the transfer learning

02:06:24 to work between camera A and camera B.

02:06:27 So we took what came out of all the deep neural models

02:06:29 for the different cameras,

02:06:30 we fed it into an OpenCog symbolic representation.

02:06:33 Then we did some pattern mining and some reasoning

02:06:36 on what came out of all the different cameras

02:06:38 within the symbolic graph.

02:06:39 And that worked well for that application.

02:06:42 I mean, Hugo Latapie from Cisco gave a talk touching on that

02:06:45 at last year’s AGI conference, it was in Shenzhen.

02:06:48 On the other hand, we learned from there,

02:06:51 it was kind of clunky to get the deep neural models

02:06:53 to work well with the symbolic system

02:06:55 because we were using torch.

02:06:58 And torch keeps a sort of stateful computation graph,

02:07:03 but you needed like real time access

02:07:05 to that computation graph within our hypergraph.

02:07:07 And we certainly did it,

02:07:10 Alexey Potapov, who leads our St. Petersburg team,

02:07:13 wrote a great paper on cognitive modules in OpenCog

02:07:16 explaining sort of how do you deal

02:07:17 with the torch compute graph inside OpenCog.

02:07:19 But in the end we realized like,

02:07:22 that just hadn’t been one of our design thoughts

02:07:25 when we built OpenCog, right?

02:07:27 So between wanting really fast dependent type checking

02:07:30 and wanting much more efficient interoperation

02:07:33 between the computation graphs

02:07:35 of deep neural net frameworks and OpenCog’s hypergraph

02:07:37 and adding on top of that,

02:07:40 wanting to more effectively run an OpenCog hypergraph

02:07:42 distributed across RAM in 10,000 machines,

02:07:45 whereas we’re doing dozens of machines now,

02:07:47 but it’s just not, we didn’t architect it

02:07:50 with that sort of modern scalability in mind.

02:07:53 So these performance requirements are what have driven us

02:07:56 to want to rearchitect the base,

02:08:00 but the core AGI paradigm doesn’t really change.

02:08:05 Like the mathematics is the same.

02:08:07 It’s just, we can’t scale to the level that we want

02:08:11 in terms of distributed processing

02:08:13 or speed of various kinds of processing

02:08:16 with the current infrastructure

02:08:19 that was built in the phase 2001 to 2008,

02:08:22 which is hardly shocking.

02:08:26 Well, I mean, the three things you mentioned

02:08:27 are really interesting.

02:08:28 So what do you think about in terms of interoperability

02:08:32 communicating with computational graph of neural networks?

02:08:36 What do you think about the representations

02:08:38 that neural networks form?

02:08:40 They’re bad, but there’s many ways

02:08:42 that you could deal with that.

02:08:44 So I’ve been wrestling with this a lot

02:08:46 in some work on unsupervised grammar induction,

02:08:49 and I have a simple paper on that

02:08:52 that I’ll give at the next AGI conference,

02:08:55 online portion of which is next week, actually.

02:08:58 What is grammar induction?

02:09:00 So this isn’t AGI either,

02:09:02 but it’s sort of on the verge

02:09:05 between narrow AI and AGI or something.

02:09:08 Unsupervised grammar induction is the problem.

02:09:11 Throw your AI system, a huge body of text,

02:09:15 and have it learn the grammar of the language

02:09:18 that produced that text.

02:09:20 So you’re not giving it labeled examples.

02:09:22 So you’re not giving it like a thousand sentences

02:09:24 where the parses were marked up by graduate students.

02:09:27 So it’s just got to infer the grammar from the text.

02:09:30 It’s like the Rosetta Stone, but worse, right?

02:09:33 Because you only have the one language,

02:09:35 and you have to figure out what is the grammar.

02:09:37 So that’s not really AGI because,

02:09:41 I mean, the way a human learns language is not that, right?

02:09:44 I mean, we learn from language that’s used in context.

02:09:47 So it’s a social embodied thing.

02:09:49 We see how a given sentence is grounded in observation.

02:09:53 There’s an interactive element, I guess.

02:09:55 Yeah, yeah, yeah.

02:09:56 On the other hand, so I’m more interested in that.

02:10:00 I’m more interested in making an AGI system learn language

02:10:02 from its social and embodied experience.

02:10:05 On the other hand, that’s also more of a pain to do,

02:10:08 and that would lead us into Hanson Robotics

02:10:10 and their robotics work I’ve known much.

02:10:12 We’ll talk about it in a few minutes.

02:10:14 But just as an intellectual exercise,

02:10:17 as a learning exercise,

02:10:18 trying to learn grammar from a corpus

02:10:22 is very, very interesting, right?

02:10:24 And that’s been a field in AI for a long time.

02:10:27 No one can do it very well.

02:10:29 So we’ve been looking at transformer neural networks

02:10:32 and tree transformers, which are amazing.

02:10:35 These came out of Google Brain, actually.

02:10:39 And actually on that team was Lukasz Kaiser,

02:10:41 who used to work for me in

02:10:44 the period 2005 through ’08 or something.

02:10:46 So it’s been fun to see my former

02:10:50 sort of AGI employees disperse and do

02:10:52 all these amazing things.

02:10:54 Way too many sucked into Google, actually.

02:10:56 Well, yeah, anyway.

02:10:57 We’ll talk about that too.

02:10:58 Lukasz Kaiser and a bunch of these guys,

02:11:00 they created transformer networks,

02:11:03 that classic paper, like, Attention Is All You Need,

02:11:05 and all these things following on from that.

02:11:08 So we’re looking at transformer networks.

02:11:10 And like, these are able to,

02:11:13 I mean, this is what underlies GPT-2 and GPT-3 and so on,

02:11:16 which are very, very cool

02:11:18 and have absolutely no cognitive understanding

02:11:20 of any of the texts they’re looking at.

02:11:21 Like they’re very intelligent idiots, right?

02:11:24 Sorry to take this small tangent, I’ll bring this back,

02:11:28 but do you think GPT-3 understands language?

02:11:31 No, no, it understands nothing.

02:11:34 It’s a complete idiot.

02:11:35 But it’s a brilliant idiot.

02:11:36 You don’t think GPT-20 will understand language?

02:11:40 No, no, no.

02:11:42 So size is not gonna buy you understanding,

02:11:45 any more than a faster car is gonna get you to Mars.

02:11:48 It’s a completely different kind of thing.

02:11:50 I mean, these networks are very cool.

02:11:54 And as an entrepreneur,

02:11:55 I can see many highly valuable uses for them.

02:11:57 And as an artist, I love them, right?

02:12:01 So I mean, we’re using our own neural model,

02:12:05 which is along those lines

02:12:06 to control the Philip K. Dick robot now.

02:12:09 And it’s amazing to like train a neural model

02:12:12 on the robot Philip K. Dick

02:12:14 and see it come up with like crazed,

02:12:15 stoned philosopher pronouncements,

02:12:18 very much like what Philip K. Dick might’ve said, right?

02:12:21 Like these models are super cool.

02:12:24 And I’m working with Hanson Robotics now

02:12:27 on using a similar, but more sophisticated one for Sophia,

02:12:30 which we haven’t launched yet.

02:12:34 But so I think it’s cool.

02:12:36 But no, these are recognizing a large number

02:12:39 of shallow patterns.

02:12:42 They’re not forming an abstract representation.

02:12:44 And that’s the point I was coming to

02:12:47 when we’re looking at grammar induction,

02:12:50 we tried to mine patterns out of the structure

02:12:53 of the transformer network.

02:12:55 And you can, but the patterns aren’t what you want.

02:12:59 They’re nasty.

02:13:00 So I mean, if you do supervised learning,

02:13:03 if you look at sentences where you know

02:13:04 the correct parse of a sentence,

02:13:06 you can learn a matrix that maps

02:13:09 between the internal representation of the transformer

02:13:12 and the parse of the sentence.

02:13:14 And so then you can actually train something

02:13:16 that will output the sentence parse

02:13:18 from the transformer network’s internal state.

02:13:20 And we did this, I think Christopher Manning,

02:13:24 and some others have done this also.

02:13:28 But I mean, what you get is that the representation

02:13:30 is horribly ugly and is scattered all over the network

02:13:33 and doesn’t look like the rules of grammar

02:13:34 that you know are the right rules of grammar, right?

02:13:37 It’s kind of ugly.
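
A hedged sketch of that supervised probing idea, using random stand-ins for the hidden states and parse features (in practice they would come from a real transformer and gold parses): fit a linear map from internal states to parse structure and read parses back out of it.

    # Toy "probe": learn a linear map from transformer hidden states to parse features.
    # The hidden states and parse features below are random stand-ins for real data.

    import numpy as np

    rng = np.random.default_rng(0)
    hidden_dim, parse_dim, n_tokens = 64, 8, 500

    H = rng.normal(size=(n_tokens, hidden_dim))        # stand-in for transformer hidden states
    true_W = rng.normal(size=(hidden_dim, parse_dim))
    P = H @ true_W + 0.1 * rng.normal(size=(n_tokens, parse_dim))  # stand-in parse features

    # Least-squares fit of the probe matrix W, so that H @ W approximates P.
    W, *_ = np.linalg.lstsq(H, P, rcond=None)

    # "Read a parse" for a new token representation.
    new_state = rng.normal(size=(1, hidden_dim))
    predicted_parse_features = new_state @ W
    print(predicted_parse_features.shape)   # (1, 8)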

02:13:38 So what we’re actually doing is we’re using

02:13:41 a symbolic grammar learning algorithm,

02:13:44 but we’re using the transformer neural network

02:13:46 as a sentence probability oracle.

02:13:48 So like if you have a rule of grammar

02:13:52 and you aren’t sure if it’s a correct rule of grammar or not,

02:13:54 you can generate a bunch of sentences

02:13:56 using that rule of grammar

02:13:58 and a bunch of sentences violating that rule of grammar.

02:14:00 And you can see whether the transformer model

02:14:04 thinks the sentences obeying the rule of grammar

02:14:06 are more probable than the sentences

02:14:08 disobeying the rule of grammar.

02:14:10 So in that way, you can use the neural model

02:14:11 as a sentence probability oracle

02:14:13 to guide a symbolic grammar learning process.

02:14:19 And that seems to work better than trying to milk

02:14:24 the grammar out of the neural network

02:14:25 that doesn’t have it in there.
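
A minimal sketch of that oracle idea; the sentence_logprob stub below stands in for a real pretrained language model’s scoring function, and the rest of the names are invented for illustration.

    # Sketch: using a language model as a sentence-probability oracle to score
    # a candidate grammar rule. sentence_logprob is a stub standing in for a real LM.

    def sentence_logprob(sentence):
        # Placeholder: a real implementation would return log P(sentence) under a
        # pretrained transformer. This fake just penalizes sentences ending in "the".
        return -1.0 * len(sentence.split()) - (5.0 if sentence.endswith("the") else 0.0)

    def score_rule(obeying_sentences, violating_sentences):
        """Return True if the oracle prefers sentences that obey the candidate rule."""
        obey = sum(sentence_logprob(s) for s in obeying_sentences) / len(obeying_sentences)
        violate = sum(sentence_logprob(s) for s in violating_sentences) / len(violating_sentences)
        return obey > violate

    obeying = ["the dog barks", "the cat sleeps"]
    violating = ["dog barks the", "cat sleeps the"]
    print(score_rule(obeying, violating))   # True -> keep the candidate rule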

02:14:26 So I think the thing is these neural nets

02:14:29 are not getting a semantically meaningful representation

02:14:32 internally by and large.

02:14:35 So one line of research is to try to get them to do that.

02:14:38 And InfoGAN was trying to do that.

02:14:40 So like if you look back like two years ago,

02:15:43 there were all these papers on, like, Edward,

02:14:45 this probabilistic programming neural net framework

02:14:47 that Google had, which came out of InfoGAN.

02:14:49 So the idea there was like you could train

02:14:53 an InfoGAN neural net model,

02:14:55 which is a generative adversarial network

02:14:57 to recognize and generate faces.

02:14:59 And the model would automatically learn a variable

02:15:02 for how long the nose is and automatically learn a variable

02:15:04 for how wide the eyes are

02:15:05 or how big the lips are or something, right?

02:15:08 So it automatically learned these variables,

02:15:11 which have a semantic meaning.

02:15:12 So that was a rare case where a neural net

02:15:15 trained with a fairly standard GAN method

02:15:18 was able to actually learn the semantic representation.

02:15:20 So for many years, many of us tried to take that

02:15:23 to the next step and get a GAN type neural network

02:15:27 that would have not just a list of semantic latent variables,

02:15:31 but would have say a Bayes net of semantic latent variables

02:15:33 with dependencies between them.

02:15:35 The whole programming framework Edward was made for that.

02:15:38 I mean, no one got it to work, right?

02:15:40 And it could be.

02:15:41 Do you think it’s possible?

02:15:42 Yeah, do you think?

02:15:43 I don’t know.

02:15:44 It might be that back propagation just won’t work for it

02:15:47 because the gradients are too screwed up.

02:15:49 Maybe you could get it to work using CMA-ES

02:15:52 or some like floating point evolutionary algorithm.

02:15:54 We tried, we didn’t get it to work.

02:15:57 Eventually we just paused that rather than gave it up.

02:16:01 We paused that and said, well, okay, let’s try

02:16:04 more innovative ways to learn implicit,

02:16:08 to learn what are the representations implicit

02:16:11 in that network without trying to make it grow

02:16:13 inside that network.

02:16:14 And I described how we’re doing that in language.

02:16:19 You can do similar things in vision, right?

02:16:21 So what?

02:16:22 Use it as an oracle.

02:16:23 Yeah, yeah, yeah.

02:16:24 So you can, that’s one way is that you use

02:16:26 a structure learning algorithm, which is symbolic.

02:16:29 And then you use the deep neural net as an oracle

02:16:32 to guide the structure learning algorithm.

02:16:34 The other way to do it is like InfoGAN was trying to do

02:16:37 and try to tweak the neural network

02:16:40 to have the symbolic representation inside it.

02:16:43 I tend to think what the brain is doing

02:16:46 is more like using the deep neural net type thing

02:16:51 as an oracle.

02:16:52 I think the visual cortex or the cerebellum

02:16:56 are probably learning a non semantically meaningful

02:17:00 opaque tangled representation.

02:17:02 And then when they interface with the more cognitive parts

02:17:04 of the cortex, the cortex is sort of using those

02:17:08 as an oracle and learning the abstract representation.

02:17:10 So if you do sports, say take for example,

02:17:13 serving in tennis, right?

02:17:15 I mean, my tennis serve is okay, not great,

02:17:17 but I learned it by trial and error, right?

02:17:19 And I mean, I learned music by trial and error too.

02:17:22 I just sit down and play, but then if you’re an athlete,

02:17:25 which I’m not a good athlete,

02:17:27 I mean, then you’ll watch videos of yourself serving

02:17:30 and your coach will help you think about what you’re doing

02:17:32 and you’ll then form a declarative representation,

02:17:35 but your cerebellum maybe didn’t have

02:17:37 a declarative representation.

02:17:38 Same way with music, like I will hear something in my head,

02:17:43 I’ll sit down and play the thing like I heard it.

02:17:46 And then I will try to study what my fingers did

02:17:51 to see like, what did you just play?

02:17:52 Like how did you do that, right?

02:17:55 Because if you’re composing,

02:17:57 you may wanna see how you did it

02:17:59 and then declaratively morph that in some way

02:18:02 that your fingers wouldn’t think of, right?

02:18:05 But the physiological movement may come out of some opaque,

02:18:10 like cerebellar reinforcement learned thing, right?

02:18:14 And so that’s, I think trying to milk the structure

02:18:17 of a neural net by treating it as an oracle,

02:18:19 maybe more like how your declarative mind post processes

02:18:23 what your visual or motor cortex is doing.

02:18:27 I mean, in vision, it’s the same way,

02:18:29 like you can recognize beautiful art

02:18:34 much better than you can say why

02:18:36 you think that piece of art is beautiful.

02:18:38 But if you’re trained as an art critic,

02:18:40 you do learn to say why.

02:18:41 And some of it’s bullshit, but some of it isn’t, right?

02:18:44 Some of it is learning to map sensory knowledge

02:18:46 into declarative and linguistic knowledge,

02:18:51 yet without necessarily making the sensory system itself

02:18:56 use a transparent and an easily communicable representation.

02:19:00 Yeah, that’s fascinating to think of neural networks

02:19:02 as, like, dumb question answerers that you can just milk

02:19:08 to build up a knowledge base.

02:19:10 And then it can be multiple networks, I suppose,

02:19:12 from different.

02:19:13 Yeah, yeah, so I think if a group like DeepMind or OpenAI

02:19:18 were to build AGI, and I think DeepMind is like

02:19:21 a thousand times more likely from what I could tell,

02:19:25 because they’ve hired a lot of people with broad minds

02:19:30 and many different approaches and angles on AGI,

02:19:34 whereas OpenAI is also awesome,

02:19:36 but I see them as more of like a pure

02:19:39 deep reinforcement learning shop.

02:19:41 Yeah, this time, I got you.

02:19:42 So far. Yeah, there’s a lot of,

02:19:43 you’re right, I mean, there’s so much interdisciplinary

02:19:48 work at DeepMind, like neuroscience.

02:19:50 And you put that together with Google Brain,

02:19:52 which granted they’re not working that closely together now,

02:19:54 but my oldest son Zarathustra is doing his PhD

02:19:58 in machine learning applied to automated theorem proving

02:20:01 in Prague under Josef Urban.

02:20:03 So the first paper, DeepMath, which applied deep neural nets

02:20:08 to guide theorem proving was out of Google Brain.

02:20:10 I mean, by now, the automated theorem proving community

02:20:14 is going way, way, way beyond anything Google was doing,

02:20:18 but still, yeah, but anyway,

02:20:21 if that community was gonna make an AGI,

02:20:23 probably one way they would do it was,

02:20:27 take 25 different neural modules,

02:20:30 architected in different ways,

02:20:32 maybe resembling different parts of the brain,

02:20:33 like a basal ganglia model, cerebellum model,

02:20:36 a thalamus module, a few hippocampus models,

02:20:40 number of different models,

02:20:41 representing parts of the cortex, right?

02:20:43 Take all of these and then wire them together

02:20:47 to co train and learn them together like that.

02:20:52 That would be an approach to creating an AGI.

02:20:57 One could implement something like that efficiently

02:20:59 on top of our true AGI, like OpenCog 2.0 system,

02:21:03 once it exists, although obviously Google

02:21:06 has their own highly efficient implementation architecture.

02:21:10 So I think that’s a decent way to build AGI.

02:21:13 I was very interested in that in the mid 90s,

02:21:15 but I mean, the knowledge about how the brain works

02:21:19 sort of pissed me off, like it wasn’t there yet.

02:21:21 Like, you know, in the hippocampus,

02:21:23 you have these concept neurons,

02:21:24 like the so called grandmother neuron,

02:21:26 which everyone laughed at it, it’s actually there.

02:21:28 Like I have some Lex Fridman neurons

02:21:31 that fire differentially when I see you

02:21:33 and not when I see any other person, right?

02:21:35 So how do these Lex Fridman neurons,

02:21:38 how do they coordinate with the distributed representation

02:21:41 of Lex Fridman I have in my cortex, right?

02:21:44 There’s some back and forth between cortex and hippocampus

02:21:47 that lets these discrete symbolic representations

02:21:50 in hippocampus correlate and cooperate

02:21:53 with the distributed representations in cortex.

02:21:55 This probably has to do with how the brain

02:21:57 does its version of abstraction and quantifier logic, right?

02:22:00 Like you can have a single neuron in the hippocampus

02:22:02 that activates a whole distributed activation pattern

02:22:05 in cortex, well, this may be how the brain does

02:22:09 like symbolization and abstraction

02:22:11 as in functional programming or something,

02:22:14 but we can’t measure it.

02:22:15 Like we don’t have enough electrodes stuck

02:22:17 between the cortex and the hippocampus

02:22:20 in any known experiment to measure it.

02:22:23 So I got frustrated with that direction,

02:22:26 not because it’s impossible.

02:22:27 Because we just don’t understand enough yet.

02:22:29 Of course, it’s a valid research direction.

02:22:31 You can try to understand more and more.

02:22:33 And we are measuring more and more

02:22:34 about what happens in the brain now than ever before.

02:22:38 So it’s quite interesting.

02:22:40 On the other hand, I sort of got more

02:22:43 of an engineering mindset about AGI.

02:22:46 I’m like, well, okay,

02:22:47 we don’t know how the brain works that well.

02:22:50 We don’t know how birds fly that well yet either.

02:22:52 We have no idea how a hummingbird flies

02:22:54 in terms of the aerodynamics of it.

02:22:56 On the other hand, we know basic principles

02:22:59 of like flapping and pushing the air down.

02:23:01 And we know the basic principles

02:23:03 of how the different parts of the brain work.

02:23:05 So let’s take those basic principles

02:23:07 and engineer something that embodies those basic principles,

02:23:11 but is well designed for the hardware

02:23:14 that we have on hand right now.

02:23:18 So do you think we can create AGI

02:23:20 before we understand how the brain works?

02:23:22 I think that’s probably what will happen.

02:23:25 And maybe the AGI will help us do better brain imaging

02:23:28 that will then let us build artificial humans,

02:23:30 which is very, very interesting to us

02:23:33 because we are humans, right?

02:23:34 I mean, building artificial humans is super worthwhile.

02:23:38 I just think it’s probably not the shortest path to AGI.

02:23:42 So it’s fascinating idea that we would build AGI

02:23:45 to help us understand ourselves.

02:23:50 A lot of people ask me if the young people

02:23:54 interested in doing artificial intelligence,

02:23:56 they look at sort of doing graduate level, even undergrads,

02:24:01 but graduate level research and they see

02:24:04 where the artificial intelligence community stands now,

02:24:06 it’s not really AGI type research for the most part.

02:24:09 So the natural question they ask is

02:24:12 what advice would you give?

02:24:13 I mean, maybe I could ask if people were interested

02:24:17 in working on OpenCog or in some kind of direct

02:24:22 or indirect connection to OpenCog or AGI research,

02:24:25 what would you recommend?

02:24:28 OpenCog, first of all, is an open source project.

02:24:30 There’s a Google group discussion list.

02:24:35 There’s a GitHub repository.

02:24:36 So if anyone’s interested in lending a hand

02:24:39 with that aspect of AGI,

02:24:42 introduce yourself on the OpenCog email list.

02:24:46 And there’s a Slack as well.

02:24:47 I mean, we’re certainly interested to have inputs

02:24:53 into our redesign process for a new version of OpenCog,

02:24:57 but also we’re doing a lot of very interesting research.

02:25:01 I mean, we’re working on data analysis

02:25:04 for COVID clinical trials.

02:25:05 We’re working with Hanson Robotics.

02:25:06 We’re doing a lot of cool things

02:25:08 with the current version of OpenCog now.

02:25:10 So there’s certainly opportunity to jump into OpenCog

02:25:14 or various other open source AGI oriented projects.

02:25:18 So would you say there’s like masters

02:25:20 and PhD theses in there?

02:25:22 Plenty, yeah, plenty, of course.

02:25:23 I mean, the challenge is to find a supervisor

02:25:26 who wants to foster that sort of research,

02:25:29 but it’s way easier than it was when I got my PhD, right?

02:25:32 It’s okay, great.

02:25:33 We talked about OpenCog, which is kind of one,

02:25:36 the software framework,

02:25:38 but also the actual attempt to build an AGI system.

02:25:44 And then there is this exciting idea of SingularityNet.

02:25:48 So maybe can you say first what is SingularityNet?

02:25:53 Sure, sure.

02:25:54 SingularityNet is a platform

02:25:59 for realizing a decentralized network

02:26:05 of artificial intelligences.

02:26:08 So Marvin Minsky, the AI pioneer who I knew a little bit,

02:26:14 he had the idea of a society of minds,

02:26:16 like you should achieve an AI

02:26:18 not by writing one algorithm or one program,

02:26:21 but you should put a bunch of different AIs out there

02:26:24 and the different AIs will interact with each other,

02:26:27 each playing their own role.

02:26:29 And then the totality of the society of AIs

02:26:32 would be the thing

02:26:34 that displayed the human level intelligence.

02:26:36 And I had, when he was alive,

02:26:39 I had many debates with Marvin about this idea.

02:26:43 And I think he really thought the mind

02:26:49 was more like a society than I do.

02:26:51 Like I think you could have a mind

02:26:54 that was as disorganized as a human society,

02:26:56 but I think a human like mind

02:26:57 has a bit more central control than that actually.

02:27:00 Like, I mean, we have this thalamus

02:27:02 and the medulla and limbic system.

02:27:04 We have a sort of top down control system

02:27:07 that guides much of what we do,

02:27:10 more so than a society does.

02:27:12 So I think he stretched that metaphor a little too far,

02:27:16 but I also think there’s something interesting there.

02:27:20 And so in the 90s,

02:27:24 when I started my first sort of nonacademic AI project,

02:27:27 WebMind, which was an AI startup in New York

02:27:30 in the Silicon Alley area in the late 90s,

02:27:34 what I was aiming to do there

02:27:36 was make a distributed society of AIs,

02:27:40 the different parts of which would live

02:27:41 on different computers all around the world.

02:27:43 And each one would do its own thinking

02:27:45 about the data local to it,

02:27:47 but they would all share information with each other

02:27:48 and outsource work with each other and cooperate.

02:27:51 And the intelligence would be in the whole collective.

02:27:54 And I organized a conference together with Francis Heylighen

02:27:57 at Free University of Brussels in 2001,

02:28:00 which was the Global Brain Zero Conference.

02:28:02 And we’re planning the next version,

02:28:04 the Global Brain One Conference

02:28:06 at the Free University of Brussels for next year, 2021.

02:28:10 So 20 years after.

02:28:12 And then maybe we can have the next one 10 years after that,

02:28:14 like exponentially faster until the singularity comes, right?

02:28:19 The timing is right, yeah.

02:28:20 Yeah, yeah, exactly.

02:28:22 So yeah, the idea with the Global Brain

02:28:25 was maybe the AI won’t just be in a program

02:28:28 on one guy’s computer,

02:28:29 but the AI will be in the internet as a whole

02:28:32 with the cooperation of different AI modules

02:28:35 living in different places.

02:28:37 So one of the issues you face

02:28:39 when architecting a system like that

02:28:41 is, you know, how is the whole thing controlled?

02:28:44 Do you have like a centralized control unit

02:28:47 that pulls the puppet strings

02:28:48 of all the different modules there?

02:28:50 Or do you have a fundamentally decentralized network

02:28:55 where the society of AIs is controlled

02:28:59 in some democratic and self organized way,

02:29:01 by all the AIs in that society, right?

02:29:04 And Francis and I had different views on many things,

02:29:08 but we both wanted to make like a global society

02:29:13 of AI minds with a decentralized organizational mode.

02:29:19 Now, the main difference was he wanted the individual AIs

02:29:25 to be all incredibly simple

02:29:27 and all the intelligence to be on the collective level.

02:29:30 Whereas I thought that was cool,

02:29:32 but I thought a more practical way to do it might be

02:29:35 if some of the agents in the society of minds

02:29:39 were fairly generally intelligent on their own.

02:29:41 So like you could have a bunch of OpenCogs out there

02:29:44 and a bunch of simpler learning systems.

02:29:47 And then these are all cooperating, coordinating together

02:29:49 sort of like in the brain.

02:29:51 Okay, the brain as a whole is the general intelligence,

02:29:55 but some parts of the cortex,

02:29:56 you could say have a fair bit of general intelligence

02:29:58 on their own,

02:29:59 whereas say parts of the cerebellum or limbic system

02:30:02 have very little general intelligence on their own.

02:30:04 And they’re contributing to general intelligence

02:30:07 by way of their connectivity to other modules.

02:30:10 Do you see instantiations of the same kind of,

02:30:13 maybe different versions of OpenCog,

02:30:15 but also just the same version of OpenCog

02:30:17 and maybe many instantiations of it as being all parts of it?

02:30:21 That’s what David Hanson and I want to do

02:30:23 with many Sophias and other robots.

02:30:25 Each one has its own individual mind living on the server,

02:30:29 but there’s also a collective intelligence infusing them

02:30:32 and a part of the mind living on the edge in each robot.

02:30:35 So the thing is at that time,

02:30:38 as well as WebMind being implemented in Java 1.1

02:30:41 as like a massive distributed system,

02:30:46 blockchain wasn’t there yet.

02:30:48 So, how to do this decentralized control,

02:30:51 We sort of knew it.

02:30:52 We knew about distributed systems.

02:30:54 We knew about encryption.

02:30:55 So I mean, we had the key principles

02:30:58 of what underlies blockchain now,

02:31:00 but I mean, we didn’t put it together

02:31:01 in the way that it’s been done now.

02:31:02 So when Vitalik Buterin and colleagues

02:31:05 came out with Ethereum blockchain,

02:31:08 many, many years later, like 2013 or something,

02:31:11 then I was like, well, this is interesting.

02:31:13 Like this Solidity scripting language.

02:31:17 It’s kind of dorky in a way.

02:31:18 And I don’t see why you need a Turing-complete language

02:31:21 for this purpose.

02:31:22 But on the other hand,

02:31:24 this is like the first time I could sit down

02:31:27 and start to like script infrastructure

02:31:29 for decentralized control of the AIs

02:31:32 in this society of minds in a tractable way.

02:31:35 Like you can hack the Bitcoin code base,

02:31:37 but it’s really annoying.

02:31:38 Whereas Solidity, Ethereum’s scripting language,

02:31:41 is just nicer and easier to use.

02:31:44 I’m very annoyed with it by this point.

02:31:45 But like Java, I mean, these languages are amazing

02:31:49 when they first come out.

02:31:50 So then I came up with the idea

02:31:52 that turned into SingularityNet.

02:31:53 Okay, let’s make a decentralized agent system

02:31:58 where a bunch of different AIs,

02:32:00 wrapped up in say different Docker containers

02:32:02 or LXC containers,

02:32:04 different AIs can each of them have their own identity

02:32:07 on the blockchain.

02:32:08 And the coordination of this community of AIs

02:32:11 has no central controller, no dictator, right?

02:32:14 And there’s no central repository of information.

02:32:17 The coordination of the society of minds

02:32:19 is done entirely by the decentralized network

02:32:22 in a decentralized way by the algorithms, right?

02:32:25 Because the model of Bitcoin is in math we trust, right?

02:32:29 And so that’s what you need.

02:32:30 You need the society of minds to trust only in math,

02:32:33 not trust only in one centralized server.

02:32:37 So the AI systems themselves are outside of the blockchain,

02:32:40 but then the communication between them.

02:32:41 At the moment, yeah, yeah.

02:32:43 I would have loved to put the AI’s operations on chain

02:32:46 in some sense, but in Ethereum, it’s just too slow.

02:32:50 You can’t do it.

02:32:52 Somehow it’s the basic communication between AI systems.

02:32:56 That’s the distribution.

02:32:58 Basically an AI is just some software in SingularityNet.

02:33:02 An AI is just some software process living in a container.

02:33:05 And there’s a proxy that lives in that container

02:33:09 along with the AI that handles the interaction

02:33:10 with the rest of SingularityNet.

02:33:13 And then when one AI wants to interact

02:33:15 with another one in the network,

02:33:16 they set up a number of channels.

02:33:18 And the setup of those channels uses the Ethereum blockchain.

02:33:22 Once the channels are set up,

02:33:24 then data flows along those channels

02:33:26 without having to be on the blockchain.

02:33:29 All that goes on the blockchain is the fact

02:33:31 that some data went along that channel.

02:33:33 So you can do…

02:33:34 So there’s not a shared knowledge.

02:33:38 Well, the identity of each agent is on the blockchain,

02:33:43 on the Ethereum blockchain.

02:33:44 If one agent rates the reputation of another agent,

02:33:48 that goes on the blockchain.

02:33:49 And agents can publish what APIs they will fulfill

02:33:52 on the blockchain.

02:33:54 But the actual data for AI and the results for AI

02:33:58 is not on the blockchain.
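
A toy model of that on-chain/off-chain split, in plain Python rather than the actual SingularityNet SDK or Ethereum contracts (all names here are invented): identities, reputation ratings, and channel openings go on the "chain," while requests and results flow directly between agents.

    # Toy model of the on-chain / off-chain split described above.
    # Nothing here is real SingularityNet or Ethereum code; it just mirrors the flow.

    class Chain:
        """Stand-in for the blockchain: identities, ratings, and channel records only."""
        def __init__(self):
            self.records = []

        def register_agent(self, agent_id, offered_api):
            self.records.append(("identity", agent_id, offered_api))

        def rate(self, rater, ratee, score):
            self.records.append(("reputation", rater, ratee, score))

        def open_channel(self, a, b):
            self.records.append(("channel", a, b))


    class Agent:
        def __init__(self, agent_id, handler):
            self.agent_id = agent_id
            self.handler = handler          # the actual AI lives off-chain, in a container

        def call(self, other, payload, chain):
            chain.open_channel(self.agent_id, other.agent_id)   # channel setup hits the chain
            return other.handler(payload)                       # data itself flows off-chain


    chain = Chain()
    summarizer = Agent("summarizer", lambda text: text[:20] + "...")
    client = Agent("client", lambda x: x)
    chain.register_agent("summarizer", "text-summarization")
    print(client.call(summarizer, "A long document about decentralized AI networks", chain))
    chain.rate("client", "summarizer", 5)
    print(chain.records)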

02:33:58 Do you think it could be?

02:33:59 Do you think it should be?

02:34:02 In some cases, it should be.

02:34:04 In some cases, maybe it shouldn’t be.

02:34:05 But I mean, I think that…

02:34:09 So I’ll give you an example.

02:34:10 Using Ethereum, you can’t do it.

02:34:11 Using now, there’s more modern and faster blockchains

02:34:16 where you could start to do that in some cases.

02:34:21 Two years ago, that was less so.

02:34:23 It’s a very rapidly evolving ecosystem.

02:34:25 So like one example, maybe you can comment on

02:34:28 something I worked a lot on is autonomous vehicles.

02:34:31 You can see each individual vehicle as an AI system.

02:34:35 And you can see vehicles from Tesla, for example,

02:34:39 and then Ford and GM and all these as also like larger…

02:34:44 I mean, they all are running the same kind of system

02:34:47 on each set of vehicles.

02:34:49 So it’s individual AI systems on individual vehicles,

02:34:52 but it’s all different

02:34:53 instantiations of the same AI system within the same company.

02:34:57 So you can envision a situation where all of those AI systems

02:35:02 are put on SingularityNet, right?

02:35:05 And how do you see that happening?

02:35:10 And what would be the benefit?

02:35:11 And could they share data?

02:35:13 I guess one of the biggest things is the power

02:35:16 of decentralized control, but the benefit would be,

02:35:20 it’s really nice if they can somehow share the knowledge

02:35:24 in an open way if they choose to.

02:35:26 Yeah, yeah, yeah, those are all quite good points.

02:35:29 So I think the benefit from being on the decentralized network

02:35:37 as we envision it is that we want the AIs in the network

02:35:41 to be outsourcing work to each other

02:35:43 and making API calls to each other frequently.

02:35:47 So the real benefit would be if that AI wanted to outsource

02:35:51 some cognitive processing or data processing

02:35:54 or data pre processing, whatever,

02:35:56 to some other AIs in the network,

02:35:59 which specialize in something different.

02:36:01 And this really requires a different way of thinking

02:36:06 about AI software development, right?

02:36:07 So just like object oriented programming

02:36:10 was different than imperative programming.

02:36:12 And now object oriented programmers all use these

02:36:16 frameworks to do things rather than just libraries even.

02:36:20 You know, shifting to agent based programming

02:36:23 where an AI agent is asking other live, real time

02:36:26 evolving agents for feedback on what they’re doing.

02:36:29 That’s a different way of thinking.

02:36:31 I mean, it’s not a new one.

02:36:32 There was loads of papers on agent based programming

02:36:35 in the 80s and onward.

02:36:37 But if you’re willing to shift to an agent based model

02:36:41 of development, then you can put less and less in your AI

02:36:45 and rely more and more on interactive calls

02:36:48 to other AIs running in the network.
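
(One way to picture that agent-based shift, as a toy sketch with made-up names rather than any real framework: the agent keeps almost nothing inside itself and outsources subtasks to whichever peers advertise the needed service.)

    # Toy registry of peer agents, keyed by the service they advertise.
    registry = {}

    def advertise(service):
        def wrap(fn):
            registry[service] = fn
            return fn
        return wrap

    @advertise("summarize")
    def summarizer(text):
        return text.split(".")[0] + "."     # crude "summary": just the first sentence

    @advertise("sentiment")
    def sentiment(text):
        return "positive" if "good" in text.lower() else "neutral"

    class ThinAgent:
        """Keeps little capability of its own; outsources to peer agents at runtime."""
        def outsource(self, service, *args):
            peer = registry.get(service)
            if peer is None:
                raise LookupError(f"no agent advertises {service!r}")
            return peer(*args)
        def handle(self, document):
            return {"summary": self.outsource("summarize", document),
                    "sentiment": self.outsource("sentiment", document)}

    print(ThinAgent().handle("The demo went really good. Everything else was noise."))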

02:36:51 And of course, that’s not fully manifested yet

02:36:54 because although we’ve rolled out a nice working version

02:36:57 of SingularityNet platform,

02:36:59 there’s only 50 to 100 AIs running in there now.

02:37:03 There’s not tens of thousands of AIs.

02:37:05 So we don’t have the critical mass

02:37:08 for the whole society of mind to be doing

02:37:11 what we want to do.

02:37:11 Yeah, the magic really happens

02:37:13 when there’s just a huge number of agents.

02:37:15 Yeah, yeah, exactly.

02:37:16 In terms of data, we’re partnering closely

02:37:19 with another blockchain project called Ocean Protocol.

02:37:23 And Ocean Protocol, that’s the project of Trent McConaghy

02:37:27 who developed BigchainDB,

02:37:28 which is a blockchain based database.

02:37:30 So Ocean Protocol is basically blockchain based big data

02:37:35 and aims at making it efficient for different AI processes

02:37:39 or statistical processes or whatever

02:37:41 to share large data sets.

02:37:44 Or if one process can send a clone of itself

02:37:46 to work on the other guy’s data set

02:37:48 and send results back and so forth.

02:37:50 So instead of a data lake,

02:37:55 this is the data ocean, right?

02:37:56 So again, by getting Ocean and SingularityNet

02:37:59 to interoperate, we’re aiming to take into account

02:38:03 the big data aspect also.

02:38:05 But it’s quite challenging

02:38:08 because to build this whole decentralized

02:38:10 blockchain based infrastructure,

02:38:12 I mean, your competitors are like Google, Microsoft,

02:38:14 Alibaba and Amazon, which have so much money

02:38:17 to put behind their centralized infrastructures,

02:38:20 plus they’re solving simpler algorithmic problems

02:38:23 because making it centralized in some ways is easier, right?

02:38:27 So there are very major computer science challenges.

02:38:32 And I think what you saw with the whole ICO boom

02:38:35 in the blockchain and cryptocurrency world

02:38:37 is a lot of young hackers who were hacking Bitcoin

02:38:42 or Ethereum, and they see, well,

02:38:43 why don’t we make this decentralized on blockchain?

02:38:46 Then after they raised some money through an ICO,

02:38:48 they realize how hard it is.

02:38:49 And it’s like, actually we’re wrestling

02:38:52 with incredibly hard computer science

02:38:54 and software engineering and distributed systems problems,

02:38:58 which can be solved, but they’re just very difficult

02:39:02 to solve.

02:39:03 And in some cases, the individuals who started

02:39:05 those projects were not well equipped

02:39:08 to actually solve the problems that they wanted to solve.

02:39:12 So you think, would you say that’s the main bottleneck?

02:39:14 If you look at the future of currency,

02:39:19 the question is, well…

02:39:21 Currency, the main bottleneck is politics.

02:39:23 It’s governments and the bands of armed thugs

02:39:26 that will shoot you if you bypass their currency restriction.

02:39:29 That’s right.

02:39:30 So like your sense is that versus the technical challenges,

02:39:33 because you kind of just suggested

02:39:34 the technical challenges are quite high as well.

02:39:36 I mean, for making a distributed money,

02:39:39 you could do that on Algorand right now.

02:39:41 I mean, so that while Ethereum is too slow,

02:39:44 there’s Algorand and there’s a few other more modern,

02:39:47 more scalable blockchains that would work fine

02:39:49 for a decentralized global currency.

02:39:53 So I think there were technical bottlenecks

02:39:56 to that two years ago.

02:39:57 And maybe Ethereum 2.0 will be as fast as Algorand.

02:40:00 I don’t know, that’s not fully written yet, right?

02:40:04 So I think the obstacle to currency

02:40:07 being put on the blockchain is that…

02:40:09 Is the other stuff you mentioned.

02:40:10 I mean, currency will be on the blockchain.

02:40:11 It’ll just be on the blockchain in a way

02:40:13 that enforces centralized control

02:40:16 and government hegemony rather than otherwise.

02:40:18 Like the eRMB will probably be the first global,

02:40:20 the first currency on the blockchain.

02:40:22 The eRuble maybe next.

02:40:23 There are many…

02:40:24 The eRuble?

02:40:25 Yeah, yeah, yeah.

02:40:25 I mean, the point is…

02:40:26 Oh, that’s hilarious.

02:40:27 Digital currency, you know, makes total sense,

02:40:30 but they would rather do it in the way

02:40:32 that Putin and Xi Jinping have access

02:40:34 to the global keys for everything, right?

02:40:37 So, and then the analogy to that in terms of SingularityNet,

02:40:42 I mean, there’s Echoes.

02:40:43 I think you’ve mentioned before that Linux gives you hope.

02:40:47 AI is not as heavily regulated as money, right?

02:40:49 Not yet, right?

02:40:51 Not yet.

02:40:52 Oh, that’s a lot slipperier than money too, right?

02:40:54 I mean, money is easier to regulate

02:40:58 because it’s kind of easier to define,

02:41:00 whereas AI is, it’s almost everywhere inside everything.

02:41:04 Where’s the boundary between AI and software, right?

02:41:06 I mean, if you’re gonna regulate AI,

02:41:09 there’s no IQ test for every hardware device

02:41:11 that has a learning algorithm.

02:41:12 You’re gonna be putting like hegemonic regulation

02:41:15 on all software.

02:41:16 And I don’t rule out that that can happen.

02:41:18 And the adaptive software.

02:41:21 Yeah, but how do you tell if a software is adaptive

02:41:23 and what, every software is gonna be adaptive, I mean.

02:41:26 Or maybe they, maybe the, you know,

02:41:28 maybe we’re living in the golden age of open source

02:41:31 that will not always be open.

02:41:33 Maybe it’ll become centralized control

02:41:35 of software by governments.

02:41:37 It is entirely possible.

02:41:38 And part of what I think we’re doing

02:41:42 with things like SingularityNet protocol

02:41:45 is creating a tool set that can be used

02:41:50 to counteract that sort of thing.

02:41:52 Say a similar thing about mesh networking, right?

02:41:55 Plays a minor role now, the ability to access internet

02:41:59 like directly phone to phone.

02:42:01 On the other hand, if your government starts trying

02:42:03 to control your use of the internet,

02:42:06 suddenly having mesh networking there

02:42:09 can be very convenient, right?

02:42:10 And so right now, something like a decentralized

02:42:15 blockchain based AGI framework or narrow AI framework,

02:42:20 it’s cool, it’s nice to have.

02:42:22 On the other hand, if governments start trying

02:42:25 to clamp down on my AI interoperating

02:42:28 with someone’s AI in Russia or somewhere, right?

02:42:31 Then suddenly having a decentralized protocol

02:42:35 that nobody owns or controls

02:42:37 becomes an extremely valuable part of the tool set.

02:42:41 And, you know, we’ve put that out there now.

02:42:43 It’s not perfect, but it operates.

02:42:46 And, you know, it’s pretty blockchain agnostic.

02:42:51 So we’re talking to Algorand about making part

02:42:53 of SingularityNet run on Algorand.

02:42:56 My good friend Tufi Saliba has a cool blockchain project

02:43:00 called Toda, which is a blockchain

02:43:02 without a distributed ledger.

02:43:03 It’s like a whole other architecture.

02:43:05 So there’s a lot of more advanced things you can do

02:43:08 in the blockchain world.

02:43:09 SingularityNet could be ported to a whole bunch of,

02:43:13 it could be made multi chain, ported

02:43:14 to a whole bunch of different blockchains.

02:43:17 And there’s a lot of potential and a lot of importance

02:43:21 to putting this kind of tool set out there.

02:43:23 If you compare to OpenCog, what you could see is

02:43:26 OpenCog allows tight integration of a few AI algorithms

02:43:32 that share the same knowledge store in real time, in RAM.

02:43:36 SingularityNet allows loose integration

02:43:40 of multiple different AIs.

02:43:42 They can share knowledge, but they’re mostly not gonna

02:43:45 be sharing knowledge in RAM on the same machine.

02:43:49 And I think what we’re gonna have is a network

02:43:53 of network of networks, right?

02:43:54 Like, I mean, you have the knowledge graph

02:43:57 inside the OpenCog system,

02:44:00 and then you have a network of machines

02:44:03 inside a distributed OpenCog mind,

02:44:05 but then that OpenCog will interface with other AIs

02:44:10 doing deep neural nets or custom biology data analysis

02:44:14 or whatever they’re doing in SingularityNet,

02:44:17 which is a looser integration of different AIs,

02:44:21 some of which may be their own networks, right?

02:44:24 And I think at a very loose analogy,

02:44:27 you could see that in the human body.

02:44:29 Like the brain has regions like cortex or hippocampus,

02:44:33 which tightly interconnects like cortical columns

02:44:36 within the cortex, for example.

02:44:39 Then there’s looser connection

02:44:40 within the different lobes of the brain,

02:44:42 and then the brain interconnects with the endocrine system

02:44:45 and different parts of the body even more loosely.

02:44:48 Then your body interacts even more loosely

02:44:50 with the other people that you talk to.

02:44:53 So you often have networks within networks within networks

02:44:56 with progressively looser coupling

02:44:59 as you get higher up in that hierarchy.

02:45:02 I mean, you have that in biology,

02:45:03 you have that in the internet as a just networking medium.

02:45:08 And I think that’s what we’re gonna have

02:45:10 in the network of software processes leading to AGI.

02:45:15 That’s a beautiful way to see the world.

02:45:17 Again, the same similar question is with OpenCog.

02:45:21 If somebody wanted to build an AI system

02:45:24 and plug into the SingularityNet,

02:45:27 what would you recommend?

02:45:28 Yeah, so that’s much easier.

02:45:30 I mean, OpenCog is still a research system.

02:45:33 So it takes some expertise.

02:45:36 We have tutorials, but it’s somewhat cognitively

02:45:40 labor intensive to get up to speed on OpenCog.

02:45:44 And I mean, what’s one of the things we hope to change

02:45:46 with the true AGI OpenCog 2.0 version

02:45:49 is just make the learning curve more similar

02:45:52 to TensorFlow or Torch or something.

02:45:54 Right now, OpenCog is amazingly powerful,

02:45:57 but not simple to deal with.

02:46:00 On the other hand, SingularityNet,

02:46:03 as an open platform, was developed a little more

02:46:08 with usability in mind, though over the blockchain

02:46:10 it’s still kind of a pain.

02:46:11 So I mean, if you’re a command line guy,

02:46:14 there’s a command line interface.

02:46:16 It’s quite easy to take any AI that has an API

02:46:20 and lives in a Docker container and put it online anywhere.

02:46:23 And then it joins the global SingularityNet.

02:46:25 And anyone who puts a request for services

02:46:28 out into the SingularityNet,

02:46:30 the peer to peer discovery mechanism will find

02:46:32 your AI and if it does what was asked,

02:46:35 it can then start a conversation with your AI

02:46:38 about whether it wants to ask your AI to do something for it,

02:46:42 how much it would cost and so on.
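
(A bare-bones sketch of the service side of that, using only Python’s standard library rather than the real SingularityNet tooling; the endpoint behavior and the price field are invented for illustration. The idea is just that the AI in the container answers “what do you do and what does it cost” on one route and does the actual work on another.)

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DESCRIPTION = {"service": "sentiment", "price_per_call": "0.0001 AGI", "input": "text"}

    def run_model(text):
        # Stand-in for the actual AI living inside the Docker container.
        return {"label": "positive" if "good" in text.lower() else "neutral"}

    class Service(BaseHTTPRequestHandler):
        def _reply(self, payload):
            body = json.dumps(payload).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        def do_GET(self):       # discovery: what the service does and what it costs
            self._reply(DESCRIPTION)
        def do_POST(self):      # invocation: do the work for a caller
            length = int(self.headers.get("Content-Length", 0))
            request = json.loads(self.rfile.read(length) or b"{}")
            self._reply(run_model(request.get("text", "")))

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), Service).serve_forever()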

02:46:43 So that’s fairly simple.

02:46:46 If you wrote an AI and want it listed

02:46:50 on like official SingularityNet marketplace,

02:46:53 which is on our website,

02:46:55 then we have a publisher portal

02:46:57 and then there’s a KYC process to go through

02:47:00 because then we have some legal liability

02:47:02 for what goes on that website.

02:47:04 So in a way that’s been an education too.

02:47:07 There’s sort of two layers.

02:47:08 Like there’s the open decentralized protocol.

02:47:11 And there’s the market.

02:47:12 Yeah, anyone can use the open decentralized protocol.

02:47:15 So say some developers from Iran

02:47:17 and there’s brilliant AI guys

02:47:19 at the University of Isfahan or in Tehran,

02:47:21 they can put their stuff on SingularityNet protocol

02:47:24 and just like they can put something on the internet, right?

02:47:27 I don’t control it.

02:47:28 But if we’re gonna list something

02:47:29 on the SingularityNet marketplace

02:47:32 and put a little picture and a link to it,

02:47:34 then if I put some Iranian AI genius’s code on there,

02:47:38 then Donald Trump can send a bunch of jackbooted thugs

02:47:41 to my house to arrest me for doing business with Iran, right?

02:47:45 So, I mean, we already see in some ways

02:47:48 the value of having a decentralized protocol

02:47:51 because what I hope is that someone in Iran

02:47:53 will put online an Iranian SingularityNet marketplace, right?

02:47:57 Which you can pay in the cryptographic token,

02:47:59 which is not owned by any country.

02:48:01 And then if you’re in like Congo or somewhere

02:48:04 that doesn’t have any problem with Iran,

02:48:06 you can subcontract AI services

02:48:09 that you find on that marketplace, right?

02:48:11 Even though US citizens can’t by US law.

02:48:16 So right now, that’s kind of a minor point.

02:48:20 As you alluded, if regulations go in the wrong direction,

02:48:24 it could become more of a major point.

02:48:25 But I think it also is the case

02:48:28 that having these workarounds to regulations in place

02:48:31 is a defense mechanism against those regulations

02:48:35 being put into place.

02:48:36 And you can see that in the music industry, right?

02:48:39 I mean, Napster just happened and BitTorrent just happened.

02:48:43 And now most people in my kid’s generation,

02:48:45 they’re baffled by the idea of paying for music, right?

02:48:48 I mean, my dad pays for music.

02:48:51 I mean, but that’s because these decentralized mechanisms

02:48:55 happened and then the regulations followed, right?

02:48:58 And the regulations would be very different

02:49:01 if they’d been put into place before there was Napster

02:49:04 and BitTorrent and so forth.

02:49:05 So in the same way, we gotta put AI out there

02:49:08 in a decentralized vein and big data out there

02:49:11 in a decentralized vein now,

02:49:13 so that the most advanced AI in the world

02:49:16 is fundamentally decentralized.

02:49:18 And if that’s the case, that’s just the reality

02:49:20 the regulators have to deal with.

02:49:23 And then as in the music case,

02:49:25 they’re gonna come up with regulations

02:49:27 that sort of work with the decentralized reality.

02:49:32 Beautiful.

02:49:34 You are the chief scientist of Hanson Robotics.

02:49:37 You’re still involved with Hanson Robotics,

02:49:40 doing a lot of really interesting stuff there.

02:49:42 This is for people who don’t know the company

02:49:44 that created Sophia the Robot.

02:49:47 Can you tell me who Sophia is?

02:49:51 I’d rather start by telling you who David Hanson is.

02:49:54 Because David is the brilliant mind behind the Sophia Robot.

02:49:58 And he remains, so far, he remains more interesting

02:50:01 than his creation, although she may be improving

02:50:05 faster than he is, actually.

02:50:07 I mean, he’s a…

02:50:08 So yeah, I met David maybe 2007 or something

02:50:15 at some futurist conference we were both speaking at.

02:50:18 And I could see we had a great deal in common.

02:50:22 I mean, we were both kind of crazy,

02:50:25 but we both had a passion for AGI and the singularity.

02:50:31 And we were both huge fans of the work

02:50:33 of Philip K. Dick, the science fiction writer.

02:50:36 And I wanted to create benevolent AGI

02:50:40 that would create massively better life

02:50:44 for all humans and all sentient beings,

02:50:47 including animals, plants, and superhuman beings.

02:50:50 And David, he wanted exactly the same thing,

02:50:53 but he had a different idea of how to do it.

02:50:56 He wanted to get computational compassion.

02:50:59 Like he wanted to get machines that would love people

02:51:03 and empathize with people.

02:51:05 And he thought the way to do that was to make a machine

02:51:08 that could look people eye to eye, face to face,

02:51:12 look at people and make people love the machine,

02:51:15 and the machine loves the people back.

02:51:17 So I thought that was very different way of looking at it

02:51:21 because I’m very math oriented.

02:51:22 And I’m just thinking like,

02:51:24 what is the abstract cognitive algorithm

02:51:28 that will let the system, you know,

02:51:29 internalize the complex patterns of human values,

02:51:32 blah, blah, blah.

02:51:33 Whereas he’s like, look you in the face and the eye

02:51:35 and love you, right?

02:51:37 So we hit it off quite well.

02:51:41 And we talked to each other off and on.

02:51:44 Then I moved to Hong Kong in 2011.

02:51:49 So I’ve been living all over the place.

02:51:53 I’ve been in Australia and New Zealand in my academic career.

02:51:56 Then in Las Vegas for a while.

02:51:59 Was in New York in the late 90s

02:52:00 starting my entrepreneurial career.

02:52:03 Was in DC for nine years

02:52:05 doing a bunch of US government consulting stuff.

02:52:07 Then moved to Hong Kong in 2011,

02:52:12 mostly because I met a Chinese girl

02:52:13 who I fell in love with and we got married.

02:52:16 She’s actually not from Hong Kong.

02:52:17 She’s from mainland China,

02:52:18 but we converged together in Hong Kong.

02:52:21 Still married now, I have a two year old baby.

02:52:24 So went to Hong Kong to see about a girl, I guess.

02:52:26 Yeah, pretty much, yeah.

02:52:29 And on the other hand,

02:52:31 I started doing some cool research there

02:52:33 with Gino Yu at Hong Kong Polytechnic University.

02:52:36 I got involved with a project called Aidyia

02:52:38 using machine learning for stock and futures prediction,

02:52:41 which was quite interesting.

02:52:43 And I also got to know something

02:52:45 about the consumer electronics

02:52:47 and hardware manufacturer ecosystem in Shenzhen

02:52:50 across the border,

02:52:51 which is like the only place in the world

02:52:53 that makes sense to make complex consumer electronics

02:52:56 at large scale and low cost.

02:52:57 It’s just, it’s astounding the hardware ecosystem

02:53:00 that you have in South China.

02:53:03 Like US people here cannot imagine what it’s like.

02:53:07 So David was starting to explore that also.

02:53:12 I invited him to Hong Kong to give a talk

02:53:13 at Hong Kong PolyU,

02:53:15 and I introduced him in Hong Kong to some investors

02:53:19 who were interested in his robots.

02:53:21 And he didn’t have Sophia then,

02:53:23 he had a robot of Philip K. Dick,

02:53:25 our favorite science fiction writer.

02:53:26 He had a robot Einstein,

02:53:28 he had some little toy robots

02:53:29 that looked like his son Zeno.

02:53:31 So through the investors I connected him to,

02:53:35 he managed to get some funding

02:53:37 to basically port Hanson Robotics to Hong Kong.

02:53:40 And when he first moved to Hong Kong,

02:53:42 I was working on AGI research

02:53:45 and also on this machine learning trading project.

02:53:49 So I didn’t get that tightly involved

02:53:50 with Hanson Robotics.

02:53:52 But as I hung out with David more and more,

02:53:56 as we were both there in the same place,

02:53:59 I started to get,

02:54:01 I started to think about what you could do

02:54:04 to make his robots smarter than they were.

02:54:08 And so we started working together

02:54:10 and for a few years I was chief scientist

02:54:12 and head of software at Hanson Robotics.

02:54:15 Then when I got deeply into the blockchain side of things,

02:54:19 I stepped back from that and cofounded SingularityNet.

02:54:24 David Hanson was also one of the cofounders

02:54:26 of SingularityNet.

02:54:27 So part of our goal there had been

02:54:30 to make the blockchain based like cloud mind platform

02:54:33 for Sophia and the other Hanson robots.

02:54:37 Sophia would be just one of the robots in SingularityNet.

02:54:41 Yeah, yeah, yeah, exactly.

02:54:43 Sophia, many copies of the Sophia robot

02:54:47 would be among the user interfaces

02:54:51 to the globally distributed SingularityNet cloud mind.

02:54:54 And I mean, David and I talked about that

02:54:57 for quite a while before cofounding SingularityNet.

02:55:01 By the way, in his vision and your vision,

02:55:04 was Sophia tightly coupled to a particular AI system

02:55:09 or was the idea that you can plug,

02:55:11 you could just keep plugging in different AI systems

02:55:14 within the head of it?

02:55:15 David’s view was always that Sophia would be a platform,

02:55:22 much like say the Pepper robot is a platform from SoftBank.

02:55:26 Should be a platform with a set of nicely designed APIs

02:55:31 that anyone can use to experiment

02:55:33 with their different AI algorithms on that platform.

02:55:38 And SingularityNet, of course, fits right into that, right?

02:55:41 Because SingularityNet, it’s an API marketplace.

02:55:44 So anyone can put their AI on there.

02:55:46 OpenCog is a little bit different.

02:55:49 I mean, David likes it, but I’d say it’s my thing.

02:55:52 It’s not his.

02:55:52 Like David has a little more passion

02:55:55 for biologically based approaches to AI than I do,

02:55:58 which makes sense.

02:56:00 I mean, he’s really into human physiology and biology.

02:56:02 He’s a character sculptor, right?

02:56:05 So yeah, he’s interested in,

02:56:07 but he also worked a lot with rule based

02:56:09 and logic based AI systems too.

02:56:11 So yeah, he’s interested in not just Sophia,

02:56:14 but all the Hanson robots as a powerful social

02:56:17 and emotional robotics platform.

02:56:21 And what I saw in Sophia was a way

02:56:26 to get AI algorithms out there

02:56:32 in front of a whole lot of different people

02:56:34 in an emotionally compelling way.

02:56:36 And part of my thought was really kind of abstract

02:56:39 connected to AGI ethics.

02:56:41 And many people are concerned AGI is gonna enslave everybody

02:56:46 or turn everybody into computronium

02:56:50 to make extra hard drives for their cognitive engine

02:56:54 or whatever.

02:56:55 And emotionally I’m not driven to that sort of paranoia.

02:57:01 I’m really just an optimist by nature,

02:57:04 but intellectually I have to assign a non zero probability

02:57:09 to those sorts of nasty outcomes.

02:57:12 Cause if you’re making something 10 times as smart as you,

02:57:14 how can you know what it’s gonna do?

02:57:16 There’s an irreducible uncertainty there

02:57:19 just as my dog can’t predict what I’m gonna do tomorrow.

02:57:22 So it seemed to me that based on our current state

02:57:26 of knowledge, the best way to bias the AGI as we create

02:57:32 toward benevolence would be to infuse them with love

02:57:38 and compassion the way that we do our own children.

02:57:41 So you want to interact with AIs in the context

02:57:45 of doing compassionate, loving and beneficial things.

02:57:49 And in that way, as your children will learn

02:57:52 by doing compassionate, beneficial,

02:57:53 loving things alongside you.

02:57:55 And that way the AI will learn in practice

02:57:58 what it means to be compassionate, beneficial and loving.

02:58:02 It will get a sort of ingrained intuitive sense of this,

02:58:06 which it can then abstract in its own way

02:58:09 as it gets more and more intelligent.

02:58:11 Now, David saw this the same way.

02:58:12 That’s why he came up with the name Sophia,

02:58:15 which means wisdom.

02:58:18 So it seemed to me making these beautiful, loving robots

02:58:22 to be rolled out for beneficial applications

02:58:26 would be the perfect way to roll out early stage AGI systems

02:58:31 so they can learn from people

02:58:33 and not just learn factual knowledge,

02:58:35 but learn human values and ethics from people

02:58:38 while being their home service robots,

02:58:41 their education assistants, their nursing robots.

02:58:44 So that was the grand vision.

02:58:46 Now, if you’ve ever worked with robots,

02:58:48 the reality is quite different, right?

02:58:50 Like the first principle is the robot is always broken.

02:58:53 I mean, I worked with robots in the 90s a bunch

02:58:57 when you had to solder them together yourself

02:58:59 and I’d put neural nets doing reinforcement learning

02:59:02 on like overturned salad bowl type robots

02:59:05 back in the 90s when I was a professor.

02:59:09 Things of course advanced a lot, but…

02:59:12 But the principle still holds.

02:59:13 The principle that the robot’s always broken still holds.

02:59:16 Yeah, so faced with the reality of making Sophia do stuff,

02:59:21 many of my robo AGI aspirations were temporarily cast aside.

02:59:26 And I mean, there’s just a practical problem

02:59:30 of making this robot interact in a meaningful way

02:59:33 because like, you put nice computer vision on there,

02:59:36 but there’s always glare.

02:59:38 And then, or you have a dialogue system,

02:59:41 but at the time I was there,

02:59:43 like no speech to text algorithm could deal

02:59:46 with Hong Kongese people’s English accents.

02:59:49 So the speech to text was always bad.

02:59:51 So the robot always sounded stupid

02:59:53 because it wasn’t getting the right text, right?

02:59:55 So I started to view that really

02:59:58 as what in software engineering you call a walking skeleton,

03:00:02 which is maybe the wrong metaphor to use for Sophia

03:00:05 or maybe the right one.

03:00:06 I mean, where the walking skeleton is

03:00:08 in software development is

03:00:10 if you’re building a complex system, how do you get started?

03:00:14 But one way is to first build part one well,

03:00:16 then build part two well, then build part three well

03:00:18 and so on.

03:00:19 And the other way is you make like a simple version

03:00:22 of the whole system and put something in the place

03:00:24 of every part the whole system will need

03:00:27 so that you have a whole system that does something.

03:00:29 And then you work on improving each part

03:00:31 in the context of that whole integrated system.

03:00:34 So that’s what we did on a software level in Sophia.

03:00:38 We made like a walking skeleton software system

03:00:41 where so there’s something that sees,

03:00:43 there’s something that hears, there’s something that moves,

03:00:46 there’s something that remembers,

03:00:48 there’s something that learns.

03:00:49 You put a simple version of each thing in there

03:00:52 and you connect them all together

03:00:54 so that the system will do its thing.
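
(Reduced to a toy loop, a walking skeleton in that sense looks something like the sketch below; every component is deliberately trivial, and the only point is that something occupies each slot so the whole pipeline runs end to end.)

    import random

    def see():   return random.choice(["person enters", "person leaves", "nothing"])
    def hear():  return random.choice(["hello", "", "how are you"])

    def remember(memory, event):
        memory.append(event)
        return memory

    def decide(event, utterance):
        if utterance:          return f"say: nice to hear '{utterance}'"
        if "enters" in event:  return "move: turn toward person"
        return "idle"

    def learn(memory):
        # Trivial "learning": count how often each percept has occurred.
        counts = {}
        for e in memory:
            counts[e] = counts.get(e, 0) + 1
        return counts

    memory = []
    for step in range(5):
        event, utterance = see(), hear()
        memory = remember(memory, (event, utterance))
        print(step, decide(event, utterance))
    print("learned frequencies:", learn(memory))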

03:00:56 So there’s a lot of AI in there.

03:00:59 There’s not any AGI in there.

03:01:01 I mean, there’s computer vision to recognize people’s faces,

03:01:04 recognize when someone comes in the room and leaves,

03:01:07 trying to recognize whether two people are together or not.

03:01:10 I mean, the dialogue system,

03:01:13 it’s a mix of like hand coded rules with deep neural nets

03:01:18 that come up with their own responses.

03:01:21 And there’s some attempt to have a narrative structure

03:01:25 and sort of try to pull the conversation

03:01:28 into something with a beginning, middle and end

03:01:30 and this sort of story arc.

03:01:32 So it’s…

03:01:33 I mean, like if you look at the Loebner Prize and the systems

03:01:37 that beat the Turing Test currently,

03:01:39 they’re heavily rule based

03:01:40 because like you had said, narrative structure

03:01:43 to create compelling conversations,

03:01:45 currently, neural networks cannot do that well,

03:01:48 even with Google Meena.

03:01:50 When you actually look at full scale conversations,

03:01:53 it’s just not…

03:01:53 Yeah, this is the thing.

03:01:54 So we’ve been, I’ve actually been running an experiment

03:01:57 the last couple of weeks taking Sophia’s chat bot

03:02:01 and then Facebook’s Transformer chat bot,

03:02:03 which they opened the model.

03:02:05 We’ve had them chatting to each other

03:02:06 for a number of weeks on the server just…

03:02:08 That’s funny.

03:02:10 We’re generating training data of what Sophia says

03:02:13 in a wide variety of conversations.
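
(The setup for that kind of experiment is simple; here is a toy version with two stub bots standing in for the Sophia chatbot and the Facebook model, where every exchange gets logged as a training pair. The bots themselves are fake; only the loop structure is the point.)

    import json, random

    def bot_a(message):   # stand-in for a rule-ish bot: repetitive but stays on topic
        return f"Tell me more about {message.split()[-1].strip('.?!')}."

    def bot_b(message):   # stand-in for a neural bot: fluent but prone to rambling
        return random.choice([
            "That reminds me of something else entirely.",
            "I think robots and weather are both unpredictable.",
            "Honestly, who can say? But consider the ocean.",
        ])

    def converse(turns=6, opener="Hello, what should we discuss?"):
        log, message, speakers = [], opener, [bot_a, bot_b]
        for t in range(turns):
            speaker = speakers[t % 2]
            reply = speaker(message)
            log.append({"turn": t, "speaker": speaker.__name__,
                        "input": message, "output": reply})
            message = reply
        return log

    # Each record is one (context, response) pair usable as training data.
    print(json.dumps(converse(), indent=2))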

03:02:15 But we can see, compared to Sophia’s current chat bot,

03:02:20 the Facebook deep neural chat bot comes up

03:02:23 with a wider variety of fluent sounding sentences.

03:02:27 On the other hand, it rambles like mad.

03:02:30 The Sophia chat bot, it’s a little more repetitive

03:02:33 in the sentence structures it uses.

03:02:36 On the other hand, it’s able to keep like a conversation arc

03:02:39 over a much longer, longer period, right?

03:02:42 So there…

03:02:43 Now, you can probably surmount that using Reformer

03:02:46 and like using various other deep neural architectures

03:02:51 to improve the way these Transformer models are trained.

03:02:53 But in the end, neither one of them really understands

03:02:58 what’s going on.

03:02:59 I mean, that’s the challenge I had with Sophia

03:03:02 is if I were doing a robotics project aimed at AGI,

03:03:08 I would wanna make like a robo toddler

03:03:10 that was just learning about what it was seeing.

03:03:11 Because then the language is grounded

03:03:13 in the experience of the robot.

03:03:14 But what Sophia needs to do to be Sophia

03:03:17 is talk about sports or the weather or robotics

03:03:21 or the conference she’s talking at.

03:03:24 She needs to be fluent talking about

03:03:26 any damn thing in the world.

03:03:28 And she doesn’t have grounding for all those things.

03:03:32 So it’s just like, I mean, Google Meena

03:03:35 and Facebook’s chatbot don’t have grounding

03:03:37 for what they’re talking about either.

03:03:40 So in a way, the need to speak fluently about things

03:03:45 where there’s no nonlinguistic grounding

03:03:47 pushes what you can do for Sophia in the short term

03:03:53 a bit away from AGI.

03:03:56 I mean, it pushes you towards an IBM Watson situation

03:04:00 where you basically have to do heuristic

03:04:02 and hard code stuff and rule based stuff.

03:04:05 I have to ask you about this, okay.

03:04:07 So because in part Sophia is like an art creation

03:04:18 because it’s beautiful.

03:04:21 She’s beautiful because she inspires

03:05:24 through our human nature to anthropomorphize things.

03:04:29 We immediately see an intelligent being there.

03:04:32 Because David is a great sculptor.

03:04:34 He is a great sculptor, that’s right.

03:04:35 So in fact, if Sophia just had nothing inside her head,

03:04:40 said nothing, if she just sat there,

03:04:43 we’d already ascribe some intelligence to her.

03:04:45 There’s a long selfie line in front of her

03:04:47 after every talk.

03:04:48 That’s right.

03:04:49 So it captivated the imagination of many people.

03:04:53 I wasn’t gonna say the world,

03:04:54 but yeah, I mean a lot of people.

03:04:58 Billions of people, which is amazing.

03:05:00 It’s amazing, right.

03:05:01 Now, of course, many people have ascribed

03:05:08 essentially AGI type of capabilities to Sophia

03:05:11 when they see her.

03:05:12 And of course, friendly French folk like Yann LeCun

03:05:19 immediately see that of the people from the AI community

03:05:22 and get really frustrated because…

03:05:25 It’s understandable.

03:05:27 So what, and then they criticize people like you

03:05:31 who sit back and don’t say anything about,

03:05:36 like basically allow the imagination of the world,

03:05:39 allow the world to continue being captivated.

03:05:43 So what’s your sense of that kind of annoyance

03:05:49 that the AI community has?

03:05:51 I think there’s several parts to my reaction there.

03:05:55 First of all, if I weren’t involved with Hanson Robotics

03:05:59 and didn’t know David Hanson personally,

03:06:03 I probably would have been very annoyed initially

03:06:06 at Sophia as well.

03:06:07 I mean, I can understand the reaction.

03:06:09 I would have been like, wait,

03:06:11 all these stupid people out there think this is an AGI,

03:06:16 but it’s not an AGI, but they’re tricking people

03:06:19 that this very cool robot is an AGI.

03:06:23 And now those of us trying to raise funding to build AGI,

03:06:28 people will think it’s already there and it already works.

03:06:31 So on the other hand, I think,

03:06:36 even if I weren’t directly involved with it,

03:06:38 once I dug a little deeper into David and the robot

03:06:41 and the intentions behind it,

03:06:43 I think I would have stopped being pissed off.

03:06:47 Whereas folks like Yann LeCun have remained pissed off

03:06:51 after their initial reaction.

03:06:54 That’s his thing, that’s his thing.

03:06:56 I think that in particular struck me as somewhat ironic

03:07:01 because Yann LeCun is working for Facebook,

03:07:05 which is using machine learning to program the brains

03:07:09 of the people in the world toward vapid consumerism

03:07:13 and political extremism.

03:07:14 So if your ethics allows you to use machine learning

03:07:19 in such a blatantly destructive way,

03:07:23 why would your ethics not allow you to use machine learning

03:07:26 to make a lovable theatrical robot

03:07:29 that draws some foolish people

03:07:32 into its theatrical illusion?

03:07:34 Like if the pushback had come from Yoshua Bengio,

03:07:38 I would have felt much more humbled by it

03:07:40 because he’s not using AI for blatant evil, right?

03:07:45 On the other hand, he also is a super nice guy

03:07:48 and doesn’t bother to go out there

03:07:50 trashing other people’s work for no good reason, right?

03:07:54 Shots fired, but I get you.

03:07:55 I mean, that’s…

03:07:58 I mean, if you’re gonna ask, I’m gonna answer.

03:08:01 No, for sure.

03:08:02 I think we’ll go back and forth.

03:08:03 I’ll talk to Yann again.

03:08:04 I would add on this though.

03:08:06 I mean, David Hanson is an artist

03:08:11 and he often speaks off the cuff.

03:08:14 And I have not agreed with everything

03:08:16 that David has said or done regarding Sophia.

03:08:19 And David also has not agreed with everything

03:08:22 David has said or done about Sophia.

03:08:24 That’s an important point.

03:08:25 I mean, David is an artistic wild man

03:08:30 and that’s part of his charm.

03:08:33 That’s part of his genius.

03:08:34 So certainly there have been conversations

03:08:39 within Hanson Robotics and between me and David

03:08:42 where I was like, let’s be more open

03:08:45 about how this thing is working.

03:08:48 And I did have some influence in nudging Hanson Robotics

03:08:52 to be more open about how Sophia was working.

03:08:56 And David wasn’t especially opposed to this.

03:09:00 And he was actually quite right about it.

03:09:02 What he said was, you can tell people exactly

03:09:04 how it’s working and they won’t care.

03:09:08 They want to be drawn into the illusion.

03:09:09 And he was 100% correct.

03:09:12 I’ll tell you what, this wasn’t Sophia.

03:09:14 This was Philip K. Dick.

03:09:15 But we did some interactions between humans

03:09:18 and Philip K. Dick robot in Austin, Texas a few years back.

03:09:23 And in this case, the Philip K. Dick was just teleoperated

03:09:26 by another human in the other room.

03:09:28 So during the conversations, we didn’t tell people

03:09:31 the robot was teleoperated.

03:09:32 We just said, here, have a conversation with Phil Dick.

03:09:35 We’re gonna film you, right?

03:09:37 And they had a great conversation with Philip K. Dick

03:09:39 teleoperated by my friend, Stefan Bugaj.

03:09:42 After the conversation, we brought the people

03:09:45 in the back room to see Stefan

03:09:47 who was controlling the Philip K. Dick robot,

03:09:53 but they didn’t believe it.

03:09:54 These people were like, well, yeah,

03:09:56 but I know I was talking to Phil.

03:09:58 Maybe Stefan was typing,

03:10:00 but the spirit of Phil was animating his mind

03:10:03 while he was typing.

03:10:05 So like, even though they knew it was a human in the loop,

03:10:07 even seeing the guy there,

03:10:09 they still believed that was Phil they were talking to.

03:10:12 A small part of me believes that they were right, actually.

03:10:16 Because our understanding…

03:10:17 Well, we don’t understand the universe.

03:10:19 That’s the thing.

03:10:20 I mean, there is a cosmic mind field

03:10:22 that we’re all embedded in

03:10:24 that yields many strange synchronicities in the world,

03:10:28 which is a topic we don’t have time to go into too much here.

03:10:31 Yeah, I mean, there’s something to this

03:10:35 where our imagination about Sophia

03:10:39 and people like Yann LeCun being frustrated about it

03:10:43 is all part of this beautiful dance

03:10:45 of creating artificial intelligence

03:10:47 that’s almost essential.

03:10:48 You see with Boston Dynamics,

03:10:50 whom I’m a huge fan of as well,

03:10:53 you know, the kind of…

03:10:54 I mean, these robots are very far from intelligent.

03:10:58 I played with their last one, actually.

03:11:01 With a Spot Mini.

03:11:02 Yeah, very cool.

03:11:03 I mean, it reacts quite in a fluid and flexible way.

03:11:07 But we immediately ascribe the kind of intelligence.

03:11:10 We immediately ascribe AGI to them.

03:11:12 Yeah, yeah, if you kick it and it falls down and goes out,

03:11:14 you feel bad, right?

03:11:15 You can’t help it.

03:11:17 And I mean, that’s part of…

03:11:21 That’s gonna be part of our journey

03:11:23 in creating intelligent systems

03:11:24 more and more and more and more.

03:11:25 Like, as Sophia starts out with a walking skeleton,

03:11:29 as you add more and more intelligence,

03:11:31 I mean, we’re gonna have to deal with this kind of idea.

03:11:34 Absolutely.

03:11:35 And about Sophia, I would say,

03:11:37 I mean, first of all, I have nothing against Yann LeCun.

03:11:39 No, no, this is fun.

03:11:40 This is all for fun.

03:11:41 He’s a nice guy.

03:11:42 If he wants to play the media banter game,

03:11:45 I’m happy to play him.

03:11:48 He’s a good researcher and a good human being.

03:11:50 I’d happily work with the guy.

03:11:53 The other thing I was gonna say is,

03:11:56 I have been explicit about how Sophia works

03:12:00 and I’ve posted online in, what, H+ Magazine,

03:12:04 an online webzine.

03:12:06 I mean, I posted a moderately detailed article

03:12:09 explaining like, there are three software systems

03:12:12 we’ve used inside Sophia.

03:12:14 There’s a timeline editor,

03:12:16 which is like a rule based authoring system

03:12:18 where she’s really just being an outlet

03:12:21 for what a human scripted.

03:12:22 There’s a chat bot,

03:12:23 which has some rule based and some neural aspects.

03:12:26 And then sometimes we’ve used OpenCog behind Sophia,

03:12:29 where there’s more learning and reasoning.
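
(One simple way such a layered setup can be arbitrated, purely as an illustration and not Hanson Robotics’ actual dispatch logic: each subsystem proposes a response with a confidence, and the most confident proposal wins.)

    SCRIPTED_LINES = {"what is your name": "I am Sophia.",
                      "tell me about the singularity": "The singularity is near, as they say."}

    def timeline_editor(utterance):
        # Rule-based authoring layer: fires only on exactly scripted prompts.
        line = SCRIPTED_LINES.get(utterance.lower().strip("?!. "))
        return (line, 1.0) if line else (None, 0.0)

    def chatbot(utterance):
        # Mixed rule/neural chatbot stand-in: always has something to say, medium confidence.
        return (f"That's interesting, you mentioned '{utterance.split()[0]}'.", 0.5)

    def reasoning_layer(utterance):
        # Learning/reasoning layer stand-in: answers only when it "knows" the topic.
        if "robot" in utterance.lower():
            return ("Robots and humans can learn from each other.", 0.8)
        return (None, 0.0)

    def respond(utterance):
        proposals = [timeline_editor(utterance), chatbot(utterance), reasoning_layer(utterance)]
        text, _ = max(proposals, key=lambda p: p[1])
        return text

    for q in ["What is your name?", "Do you like robots?", "What's the weather?"]:
        print(q, "->", respond(q))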

03:12:31 And the funny thing is,

03:12:34 I can’t always tell which system is operating here, right?

03:12:37 I mean, whether she’s really learning or thinking,

03:12:41 or just appears to be. Over a half hour, I could tell,

03:12:44 but over like three or four minutes of interaction,

03:12:47 I couldn’t tell.

03:12:48 So even having three systems

03:12:49 that’s already sufficiently complex

03:12:51 where you can’t really tell right away.

03:12:53 Yeah, the thing is, even if you get up on stage

03:12:56 and tell people how Sophia is working,

03:12:59 and then they talk to her,

03:13:01 they still attribute more agency and consciousness to her

03:13:06 than is really there.

03:13:08 So I think there’s a couple of levels of ethical issue there.

03:13:13 One issue is, should you be transparent

03:13:18 about how Sophia is working?

03:13:21 And I think you should,

03:13:22 and I think we have been.

03:13:26 I mean, there’s articles online,

03:13:29 there’s some TV special that goes through me

03:13:32 explaining the three subsystems behind Sophia.

03:13:35 So the way Sophia works

03:13:38 is out there much more clearly

03:13:41 than how Facebook’s AI works or something, right?

03:13:43 I mean, we’ve been fairly explicit about it.

03:13:45 The other is, given that telling people how it works

03:13:50 doesn’t cause them to not attribute

03:13:52 too much intelligence or agency to it anyway,

03:13:55 then should you keep fooling them

03:13:58 when they want to be fooled?

03:14:01 And I mean, the whole media industry

03:14:03 is based on fooling people the way they want to be fooled.

03:14:06 And we are fooling people 100% toward a good end.

03:14:11 I mean, we are playing on people’s sense of empathy

03:14:18 and compassion so that we can give them

03:14:20 a good user experience with helpful robots.

03:14:23 And so that we can fill the AI’s mind

03:14:27 with love and compassion.

03:14:29 So I’ve been talking a lot with Hanson Robotics lately

03:14:34 about collaborations in the area of medical robotics.

03:14:37 And we haven’t quite pulled the trigger on a project

03:14:41 in that domain yet, but we may well do so quite soon.

03:14:44 So we’ve been talking a lot about robots

03:14:48 can help with elder care, robots can help with kids.

03:14:51 David’s done a lot of things with autism therapy

03:14:54 and robots before.

03:14:56 In the COVID era, having a robot

03:14:58 that can be a nursing assistant in various senses

03:15:00 can be quite valuable.

03:15:02 The robots don’t spread infection

03:15:04 and they can also deliver more attention

03:15:06 than human nurses can give, right?

03:15:07 So if you have a robot that’s helping a patient

03:15:11 with COVID, if that patient attributes more understanding

03:15:15 and compassion and agency to that robot than it really has

03:15:19 because it looks like a human, I mean, is that really bad?

03:15:22 I mean, we can tell them it doesn’t fully understand you

03:15:25 and they don’t care because they’re lying there

03:15:27 with a fever and they’re sick,

03:15:29 but they’ll react better to that robot

03:15:31 with its loving, warm facial expression

03:15:33 than they would to a pepper robot

03:15:35 or a metallic looking robot.

03:15:38 So it’s really, it’s about how you use it, right?

03:15:41 If you made a human looking like door to door sales robot

03:15:45 that used its human looking appearance

03:15:47 to scam people out of their money,

03:15:49 then you’re using that connection in a bad way,

03:15:53 but you could also use it in a good way.

03:15:57 But then that’s the same problem with every technology.

03:16:01 Beautifully put.

03:16:02 So like you said, we’re living in the era

03:16:07 of the COVID, this is 2020,

03:16:10 one of the craziest years in recent history.

03:16:14 So if we zoom out and look at this pandemic,

03:16:21 the coronavirus pandemic,

03:16:24 maybe let me ask you this kind of thing in viruses in general,

03:16:29 when you look at viruses,

03:16:32 do you see them as a kind of intelligence system?

03:16:35 I think the concept of intelligence is not that natural

03:16:38 of a concept in the end.

03:16:39 I mean, I think human minds and bodies

03:16:43 are a kind of complex self organizing adaptive system.

03:16:49 And viruses certainly are that, right?

03:16:51 They’re a very complex self organizing adaptive system.

03:16:54 If you wanna look at intelligence as Marcus Hutter defines it

03:16:58 as sort of optimizing computable reward functions

03:17:02 over computable environments,

03:17:04 for sure viruses are doing that, right?
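
(For reference, that definition can be written compactly; roughly, the Legg-Hutter universal intelligence of an agent pi is its expected reward summed over all computable environments, weighted toward the simpler ones:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is the set of computable reward-yielding environments, K(mu) is the Kolmogorov complexity of environment mu, and V^pi_mu is the expected total reward agent pi earns in mu.)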

03:17:06 And I mean, in doing so they’re causing some harm to us.

03:17:13 So the human immune system is a very complex

03:17:17 self organizing adaptive system,

03:17:19 which has a lot of intelligence to it.

03:17:21 And viruses are also adapting

03:17:23 and dividing into new mutant strains and so forth.

03:17:27 And ultimately the solution is gonna be nanotechnology,

03:17:31 right?

03:17:32 The solution is gonna be making little nanobots that.

03:17:35 Fight the viruses or.

03:17:38 Well, people will use them to make nastier viruses,

03:17:40 but hopefully we can also use them

03:17:42 to just detect, combat, and kill the viruses.

03:17:46 But I think now we’re stuck

03:17:48 with the biological mechanisms to combat these viruses.

03:17:54 And yeah, AGI is not yet mature enough

03:17:59 to use against COVID,

03:18:01 but we’ve been using machine learning

03:18:03 and also some machine reasoning in OpenCog

03:18:07 to help some doctors to do personalized medicine

03:18:10 against COVID.

03:18:11 So the problem there is given the person’s genomics

03:18:14 and given their clinical medical indicators,

03:18:16 how do you figure out which combination of antivirals

03:18:20 is gonna be most effective against COVID for that person?

03:18:24 And so that’s something

03:18:26 where machine learning is interesting,

03:18:28 but also we’re finding the abstraction

03:18:30 you get in OpenCog with machine reasoning is interesting

03:18:33 because it can help with transfer learning

03:18:36 when you have not that many different cases to study

03:18:40 and qualitative differences between different strains

03:18:43 of a virus or people of different ages who may have COVID.

03:18:47 So there’s a lot of different disparate data to work with

03:18:50 and it’s small data sets and somehow integrating them.

03:18:53 This is one of the shameful things

03:18:55 that’s very hard to get that data.

03:18:57 So, I mean, we’re working with a couple of groups

03:19:00 doing clinical trials and they’re sharing data with us

03:19:04 like under non disclosure,

03:19:06 but what should be the case is like every COVID

03:19:10 clinical trial should be putting data online somewhere

03:19:14 like suitably encrypted to protect patient privacy

03:19:17 so that anyone with the right AI algorithms

03:19:20 should be able to help analyze it

03:19:22 and any biologists should be able to analyze it by hand

03:19:24 to understand what they can, right?

03:19:25 Instead that data is like siloed inside whatever hospital

03:19:30 is running the clinical trial,

03:19:31 which is completely asinine and ridiculous.

03:19:35 So why the world works that way?

03:19:37 I mean, we could all analyze why,

03:19:39 but it’s insane that it does.

03:19:40 You look at this hydroxychloroquine, right?

03:19:44 All these clinical trials that were done

03:19:45 were reported by Surgisphere,

03:19:47 some little company no one had ever heard of,

03:19:50 and everyone paid attention to this.

03:19:53 So they were doing more clinical trials based on that

03:19:55 then they stopped doing clinical trials based on that

03:19:57 then they started again

03:19:58 and why isn’t that data just out there

03:20:01 so everyone can analyze it and see what’s going on, right?

03:20:05 Do you have hope that data will be out there eventually

03:20:10 for future pandemics?

03:20:11 I mean, do you have hope that our society

03:20:13 will move in the direction of?

03:20:15 It’s not in the immediate future

03:20:16 because the US and China frictions are getting very high.

03:20:21 So it’s hard to see US and China

03:20:24 as moving in the direction of openly sharing data

03:20:26 with each other, right?

03:20:27 It’s not, there’s some sharing of data,

03:20:30 but different groups are keeping their data private

03:20:32 till they’ve milked the best results from it

03:20:34 and then they share it, right?

03:20:36 So yeah, we’re working with some data

03:20:39 that we’ve managed to get our hands on,

03:20:41 something we’re doing to do good for the world

03:20:43 and it’s a very cool playground

03:20:44 for like putting deep neural nets and OpenCog together.

03:20:47 So we have like a bio-AtomSpace

03:20:49 full of all sorts of knowledge

03:20:51 from many different biology experiments

03:20:53 about human longevity

03:20:54 and from biology knowledge bases online.

03:20:57 And we can do like graph to vector type embeddings

03:21:00 where we take nodes from the hypergraph,

03:21:03 embed them into vectors,

03:21:04 which can then feed into neural nets

03:21:06 for different types of analysis.
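
(A compact sketch of that graph-to-vector step on a made-up toy graph, using random walks plus gensim’s Word2Vec as the embedding, which is one standard way to do it rather than necessarily what is used behind the bio-AtomSpace; the resulting node vectors are what you would then feed into a neural net.)

    import random
    import networkx as nx
    from gensim.models import Word2Vec

    # Toy biology-ish knowledge graph: genes linked to pathways and phenotypes.
    G = nx.Graph()
    G.add_edges_from([("FOXO3", "insulin_signaling"), ("insulin_signaling", "longevity"),
                      ("SIRT1", "longevity"), ("SIRT1", "caloric_restriction"),
                      ("TP53", "apoptosis"), ("apoptosis", "cancer_resistance")])

    def random_walks(graph, walks_per_node=10, length=5):
        walks = []
        for node in graph.nodes:
            for _ in range(walks_per_node):
                walk, current = [node], node
                for _ in range(length - 1):
                    current = random.choice(list(graph.neighbors(current)))
                    walk.append(current)
                walks.append(walk)
        return walks

    # Treat each walk as a "sentence" so skip-gram places related nodes near each other.
    model = Word2Vec(random_walks(G), vector_size=16, window=3, min_count=1, sg=1, epochs=50)
    vector = model.wv["FOXO3"]          # this vector can now feed a downstream neural net
    print(vector.shape, model.wv.most_similar("longevity", topn=2))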

03:21:07 And we were doing this

03:21:09 in the context of a project called Rejuve

03:21:13 that we spun off from SingularityNet

03:21:15 to do longevity analytics,

03:21:18 like understand why people live to 105 years or over

03:21:21 and other people don’t.

03:21:22 And then we had this spin off Singularity Studio

03:21:25 where we’re working with some healthcare companies

03:21:28 on data analytics.

03:21:31 But so there’s the bio-AtomSpace

03:21:33 that we built for these more commercial

03:21:35 and longevity data analysis purposes.

03:21:38 We’re repurposing and feeding COVID data

03:21:41 into the same bio-AtomSpace

03:21:44 and playing around with like graph embeddings

03:21:47 from that graph into neural nets for bioinformatics.

03:21:51 So it’s both being a cool testing ground,

03:21:54 some of our bio AI learning and reasoning.

03:21:57 And it seems we’re able to discover things

03:21:59 that people weren’t seeing otherwise.

03:22:01 Cause the thing in this case is

03:22:03 for each combination of antivirals,

03:22:05 you may have only a few patients

03:22:07 who’ve tried that combination.

03:22:08 And those few patients

03:22:09 may have their particular characteristics.

03:22:11 Like this combination of three

03:22:13 was tried only on people age 80 or over.

03:22:16 This other combination of three,

03:22:18 which has an overlap with the first combination

03:22:20 was tried more on young people.

03:22:22 So how do you combine those different pieces of data?

03:22:25 It’s a very dodgy transfer learning problem,

03:22:28 which is the kind of thing

03:22:29 that the probabilistic reasoning algorithms

03:22:31 we have inside OpenCog are better at

03:22:34 than deep neural networks.
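
(To see why those overlapping small samples call for strength-sharing, here is a toy illustration; it is not OpenCog’s probabilistic logic, just plain logistic regression from scikit-learn on made-up numbers. Each patient is encoded by which antivirals they received plus an age feature, so whatever evidence exists about a drug is pooled across every combination it appears in.)

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    drugs = ["remdesivir", "lopinavir", "interferon"]
    # (combination given, age, recovered) -- tiny, overlapping, entirely synthetic cohorts
    records = [
        ({"remdesivir", "interferon"}, 82, 1),
        ({"remdesivir", "interferon"}, 85, 0),
        ({"remdesivir", "lopinavir"}, 34, 1),
        ({"lopinavir"}, 40, 0),
        ({"interferon"}, 78, 1),
        ({"remdesivir"}, 55, 1),
    ]

    def encode(combo, age):
        return [1.0 if d in combo else 0.0 for d in drugs] + [age / 100.0]

    X = np.array([encode(c, a) for c, a, _ in records])
    y = np.array([outcome for _, _, outcome in records])

    model = LogisticRegression().fit(X, y)
    for name, coef in zip(drugs + ["age"], model.coef_[0]):
        print(f"{name:12s} weight {coef:+.2f}")   # evidence pooled per drug, not per combination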

03:22:35 On the other hand, you have gene expression data

03:22:38 where you have 25,000 genes

03:22:39 and the expression level of each gene

03:22:41 in the peripheral blood of each person.

03:22:43 So that sort of data,

03:22:44 either deep neural nets or tools like XGBoost or CatBoost,

03:22:48 these decision forest trees are better at dealing

03:22:50 with than OpenCog.

03:22:52 Cause it’s just these huge,

03:22:53 huge messy floating point vectors

03:22:55 that are annoying for a logic engine to deal with,

03:22:59 but are perfect for a decision forest or a neural net.
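
(On the gene-expression side, a minimal sketch with synthetic data of the kind of model those huge floating-point vectors suit, here using xgboost’s scikit-learn style interface.)

    import numpy as np
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    n_patients, n_genes = 200, 25_000

    # Synthetic expression matrix: one row per patient, one column per gene.
    X = rng.normal(size=(n_patients, n_genes)).astype(np.float32)
    # Synthetic outcome driven by a handful of "informative" genes.
    y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

    model = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.1, eval_metric="logloss")
    model.fit(X[:150], y[:150])
    print("held-out accuracy:", (model.predict(X[150:]) == y[150:]).mean())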

03:23:02 So it’s a great playground for like hybrid AI methodology.

03:23:07 And we can have SingularityNet have OpenCog in one agent

03:23:11 and XGBoost in a different agent

03:23:12 and they talk to each other.

03:23:14 But at the same time, it’s highly practical, right?

03:23:18 Cause we’re working with, for example,

03:23:20 some physicians on this project,

03:23:24 physicians in the group called Nth Opinion

03:23:27 based out of Vancouver and Seattle,

03:23:30 who are, these guys are working every day

03:23:32 like in the hospital with patients dying of COVID.

03:23:36 So it’s quite cool to see like neural symbolic AI,

03:23:41 like where the rubber hits the road,

03:23:43 trying to save people’s lives.

03:23:45 I’ve been doing bio AI since 2001,

03:23:48 but mostly human longevity research

03:23:51 and fly longevity research,

03:23:53 try to understand why some organisms really live a long time.

03:23:57 This is the first time it’s like a race against the clock

03:24:00 and try to use the AI to figure out stuff that,

03:24:04 like if we take two months longer to solve the AI problem,

03:24:09 some more people will die

03:24:10 because we don’t know what combination

03:24:12 of antivirals to give them.

03:24:14 At the societal level, at the biological level,

03:24:16 at any level, are you hopeful about us

03:24:21 as a human species getting out of this pandemic?

03:24:24 What are your thoughts on it in general?

03:24:26 The pandemic will be gone in a year or two

03:24:28 once there’s a vaccine for it.

03:24:30 So, I mean, that’s…

03:24:32 A lot of pain and suffering can happen in that time.

03:24:35 So that could be irreversible.

03:24:38 I think if you spend much time in Sub Saharan Africa,

03:24:43 you can see there’s a lot of pain and suffering

03:24:45 happening all the time.

03:24:47 Like you walk through the streets

03:24:49 of any large city in Sub Saharan Africa,

03:24:53 and there are loads, I mean, tens of thousands,

03:24:56 probably hundreds of thousands of people

03:24:59 lying by the side of the road,

03:25:01 dying mainly of curable diseases without food or water

03:25:06 and either ostracized by their families

03:25:07 or they left their family house

03:25:09 because they didn’t want to infect their family, right?

03:25:11 I mean, there’s tremendous human suffering

03:25:14 on the planet all the time,

03:25:17 which most folks in the developed world pay no attention to.

03:25:21 And COVID is not remotely the worst.

03:25:25 How many people are dying of malaria all the time?

03:25:27 I mean, so COVID is bad.

03:25:30 It is by no means the worst thing happening.

03:25:33 And setting aside diseases,

03:25:36 I mean, there are many places in the world

03:25:38 where you’re at risk of having like your teenage son

03:25:41 kidnapped by armed militias and forced to get killed

03:25:44 in someone else’s war, fighting tribe against tribe.

03:25:46 I mean, so humanity has a lot of problems

03:25:50 which we don’t need to have given the state of advancement

03:25:53 of our technology right now.

03:25:56 And I think COVID is one of the easier problems to solve

03:25:59 in the sense that there are many brilliant people

03:26:02 working on vaccines.

03:26:03 We have the technology to create vaccines

03:26:06 and we’re gonna create new vaccines.

03:26:08 We should be more worried

03:26:09 that we haven’t managed to defeat malaria after so long.

03:26:12 And after the Gates Foundation and others

03:26:14 putting so much money into it.

03:26:18 I mean, I think clearly the whole global medical system,

03:26:23 the global health system

03:26:25 and the global political and socioeconomic system

03:26:28 are incredibly unethical and unequal and badly designed.

03:26:33 And I mean, I don’t know how to solve that directly.

03:26:39 I think what we can do indirectly to solve it

03:26:42 is to make systems that operate in parallel

03:26:46 and off to the side of the governments

03:26:49 that are nominally controlling the world

03:26:52 with their armies and militias.

03:26:54 And to the extent that you can make compassionate

03:26:58 peer to peer decentralized frameworks

03:27:01 for doing things,

03:27:03 these are things that can start out unregulated.

03:27:06 And then if they get traction

03:27:07 before the regulators come in,

03:27:09 then they’ve influenced the way the world works, right?

03:27:12 SingularityNet aims to do this with AI.

03:27:16 REJUVE, which is a spinoff from SingularityNet.

03:27:20 You can see REJUVE.io.

03:27:22 How do you spell that?

03:27:23 R E J U V E, REJUVE.io.

03:27:26 That aims to do the same thing for medicine.

03:27:28 So it’s like peer to peer sharing of information

03:27:31 peer to peer sharing of medical data.

03:27:33 So you can share medical data into a secure data wallet.

03:27:36 You can get advice about your health and longevity

03:27:39 through apps that REJUVE.io will launch

03:27:43 within the next couple of months.

03:27:44 And then SingularityNet AI can analyze all this data,

03:27:48 but then the benefits from that analysis

03:27:50 are spread among all the members of the network.

03:27:52 But I mean, of course,

03:27:54 I’m gonna hawk my particular projects,

03:27:56 but I mean, whether or not SingularityNet and REJUVE.io

03:28:00 are the answer, I think it’s key to create

03:28:04 decentralized mechanisms for everything.

03:28:09 I mean, for AI, for human health, for politics,

03:28:13 for jobs and employment, for sharing social information.

03:28:17 And to the extent decentralized peer to peer methods

03:28:21 designed with universal compassion at the core

03:28:25 can gain traction, then these will just decrease the role

03:28:29 that government has.

03:28:31 And I think that’s much more likely to do good

03:28:34 than trying to like explicitly reform

03:28:37 the global government system.

03:28:39 I mean, I’m happy other people are trying to explicitly

03:28:41 reform the global government system.

03:28:43 On the other hand, you look at how much good the internet

03:28:47 or Google did or mobile phones did,

03:28:50 even if you’re making something that’s decentralized

03:28:54 and throwing it out everywhere and it takes hold,

03:28:56 then government has to adapt.

03:28:59 And I mean, that’s what we need to do with AI

03:29:01 and with health.

03:29:02 And in that light, I mean, the centralization

03:29:07 of healthcare and of AI is certainly not ideal, right?

03:29:11 Like most AI PhDs are being sucked in by a half dozen

03:29:15 to a dozen big companies.

03:29:17 Most AI processing power is being bought

03:29:20 by a few big companies for their own proprietary good.

03:29:23 And most medical research is within a few

03:29:26 pharmaceutical companies and clinical trials

03:29:29 run by pharmaceutical companies will stay siloed

03:29:31 within those pharmaceutical companies.

03:29:34 You know, these large centralized entities,

03:29:37 which are intelligences in themselves, these corporations,

03:29:40 but they’re mostly malevolent psychopathic

03:29:43 and sociopathic intelligences,

03:29:45 not saying the people involved are,

03:29:47 but the corporations as self organizing entities

03:29:50 on their own, which are concerned with maximizing

03:29:53 shareholder value as a sole objective function.

03:29:57 I mean, AI and medicine are being sucked

03:29:59 into these pathological corporate organizations

03:30:04 with government cooperation and Google cooperating

03:30:07 with British and US government on this

03:30:10 as one among many, many different examples.

03:30:12 23andMe providing you the nice service of sequencing

03:30:15 your genome and then licensing the genome

03:30:18 to GlaxoSmithKline on an exclusive basis, right?

03:30:21 Now you can take your own DNA

03:30:23 and do whatever you want with it.

03:30:24 But the pooled collection of 23andMe sequenced DNA

03:30:28 goes just to GlaxoSmithKline.

03:30:30 Someone else could reach out to everyone

03:30:32 who had worked with 23andMe to sequence their DNA

03:30:36 and say, give us your DNA for our open

03:30:39 and decentralized repository that we’ll make available

03:30:41 to everyone, but nobody’s doing that

03:30:43 cause it’s a pain to get organized.

03:30:45 And the customer list is proprietary to 23andMe, right?

03:30:48 So, yeah, I mean, this I think is a greater risk

03:30:54 to humanity from AI than rogue AGIs

03:30:57 turning the universe into paperclips or computronium.

03:31:01 Cause what you have here is mostly good hearted

03:31:05 and nice people who are sucked into a mode of organization

03:31:09 of large corporations, which has evolved

03:31:12 just for no individual’s fault

03:31:14 just because that’s the way society has evolved.

03:31:16 It’s not altruistic, it’s self interested

03:31:18 and become psychopathic like you said.

03:31:20 The human.

03:31:21 The corporation is psychopathic even if the people are not.

03:31:23 And that’s really the disturbing thing about it

03:31:26 because the corporations can do things

03:31:30 that are quite bad for society

03:31:32 even if nobody has a bad intention.

03:31:35 Right.

03:31:36 And then.

03:31:37 No individual member of that corporation

03:31:38 has a bad intention.

03:31:38 No, some probably do, but it’s not necessary

03:31:41 that they do for the corporation.

03:31:43 Like, I mean, Google, I know a lot of people in Google

03:31:47 and there are, with very few exceptions,

03:31:49 they’re all very nice people

03:31:51 who genuinely want what’s good for the world.

03:31:53 And Facebook, I know fewer people

03:31:56 but it’s probably mostly true.

03:31:59 It’s probably like fine young geeks

03:32:01 who wanna build cool technology.

03:32:03 I actually tend to believe that even the leaders,

03:32:05 even Mark Zuckerberg, one of the most disliked people

03:32:08 in tech, also wants to do good for the world.

03:32:11 I think about Jamie Dimon.

03:32:13 Who’s Jamie Dimon?

03:32:14 Oh, the heads of the great banks

03:32:16 may have a different psychology.

03:32:17 Oh boy, yeah.

03:32:18 Well, I tend to be naive about these things

03:32:22 and see the best in, I tend to agree with you

03:32:27 that I think the individuals wanna do good by the world

03:32:30 but the mechanism of the company

03:32:32 can sometimes be its own intelligence system.

03:32:34 I mean, there’s a, my cousin Mario Goertzel

03:32:38 has worked for Microsoft since 1985 or something

03:32:41 and I can see for him,

03:32:45 I mean, as well as just working on cool projects,

03:32:48 you’re coding stuff that gets used

03:32:51 by like billions and billions of people.

03:32:54 And you think, if I improve this feature,

03:32:57 that’s making billions of people’s lives easier, right?

03:33:00 So of course that’s cool.

03:33:03 And the engineers are not in charge

03:33:05 of running the company anyway.

03:33:06 And of course, even if you’re Mark Zuckerberg or Larry Page,

03:33:10 I mean, you still have a fiduciary responsibility.

03:33:13 And I mean, you’re responsible to the shareholders,

03:33:16 your employees who you want to keep paying them

03:33:18 and so forth.

03:33:19 So yeah, you’re enmeshed in this system.

03:33:22 And when I worked in DC,

03:33:26 I worked a bunch with INSCOM, US Army Intelligence

03:33:29 and I was heavily politically opposed

03:33:31 to what the US Army was doing in Iraq at that time,

03:33:34 like torturing people in Abu Ghraib

03:33:36 but everyone I knew in US Army and INSCOM,

03:33:39 when I hung out with them, was a very nice person.

03:33:42 They were friendly to me.

03:33:43 They were nice to my kids and my dogs, right?

03:33:46 And they really believed that the US

03:33:48 was fighting the forces of evil.

03:33:49 And if you ask me about Abu Ghraib, they’re like,

03:33:51 well, but these Arabs will chop us into pieces.

03:33:54 So how can you say we’re wrong

03:33:56 to waterboard them a bit, right?

03:33:58 Like that’s much less than what they would do to us.

03:34:00 It’s just in their worldview,

03:34:02 what they were doing was really genuinely

03:34:05 for the good of humanity.

03:34:06 Like none of them woke up in the morning

03:34:09 and said like, I want to do harm to good people

03:34:12 because I’m just a nasty guy, right?

03:34:14 So yeah, most people on the planet,

03:34:18 setting aside a few genuine psychopaths and sociopaths,

03:34:21 I mean, most people on the planet have a heavy dose

03:34:25 of benevolence and wanting to do good

03:34:27 and also a heavy capability to convince themselves

03:34:32 whatever they feel like doing

03:34:33 or whatever is best for them is for the good of humankind.

03:34:37 So the more we can decentralize control.

03:34:40 Decentralization, you know, democracy is horrible,

03:34:44 but this is like Winston Churchill said,

03:34:47 you know, it’s the worst possible system of government

03:34:49 except for all the others, right?

03:34:50 I mean, I think the whole mess of humanity

03:34:53 has many, many very bad aspects to it,

03:34:56 but so far the track record of elite groups

03:35:00 who know what’s better for all of humanity

03:35:02 is much worse than the track record

03:35:04 of the whole teeming democratic participatory

03:35:08 mess of humanity, right?

03:35:09 I mean, none of them is perfect by any means.

03:35:13 The issue with a small elite group that knows what’s best

03:35:16 is even if it starts out as truly benevolent

03:35:20 and doing good things in accordance

03:35:22 with its initial good intentions,

03:35:24 you find out you need more resources,

03:35:26 you need a bigger organization, you pull in more people,

03:35:29 internal politics arises, difference of opinions arise

03:35:32 and bribery happens, like some opponent organization

03:35:38 takes your second in command and makes them

03:35:40 the first in command of some other organization.

03:35:42 And I mean, that’s, there’s a lot of history

03:35:45 of what happens with elite groups

03:35:47 thinking they know what’s best for the human race.

03:35:50 So yeah, if I have to choose,

03:35:53 I’m gonna reluctantly put my faith

03:35:55 in the vast democratic decentralized mass.

03:35:58 And I think corporations have a track record

03:36:02 of being ethically worse

03:36:05 than their constituent human parts.

03:36:07 And democratic governments have a more mixed track record,

03:36:13 but they’re at least…

03:36:14 That’s the best we got.

03:36:15 Yeah, I mean, you can, there’s Iceland,

03:36:18 very nice country, right?

03:36:19 It’s been very democratic for 800 plus years,

03:36:23 very, very benevolent, beneficial government.

03:36:26 And I think, yeah, there are track records

03:36:28 of democratic modes of organization.

03:36:31 Linux, for example, some of the people in charge of Linux

03:36:36 are overtly complete assholes, right?

03:36:38 And trying to reform themselves in many cases,

03:36:41 in other cases not, but the organization as a whole,

03:36:45 I think it’s done a good job overall.

03:36:49 It’s been very welcoming in the third world, for example,

03:36:53 and it’s allowed advanced technology to roll out

03:36:56 on all sorts of different embedded devices and platforms

03:36:59 in places where people couldn’t afford to pay

03:37:02 for proprietary software.

03:37:03 So I’d say the internet, Linux, and many democratic nations

03:37:09 are examples of how sort of an open,

03:37:11 decentralized democratic methodology

03:37:14 can be ethically better than the sum of the parts

03:37:16 rather than worse.

03:37:17 And with corporations, that has happened only for a brief period,

03:37:21 and then it goes sour, right?

03:37:24 I mean, I’d say a similar thing about universities.

03:37:26 Like university is a horrible way to organize research

03:37:30 and get things done, yet it’s better than anything else

03:37:33 we’ve come up with, right?

03:37:34 A company can be much better,

03:37:36 but for a brief period of time,

03:37:38 and then it stops being so good, right?

03:37:42 So then I think if you believe that AGI

03:37:47 is gonna emerge sort of incrementally

03:37:50 out of AIs doing practical stuff in the world,

03:37:53 like controlling humanoid robots or driving cars

03:37:57 or diagnosing diseases or operating killer drones

03:38:01 or spying on people and reporting to the government,

03:38:04 then what kind of organization creates more and more

03:38:09 advanced narrow AI verging toward AGI

03:38:12 may be quite important because it will guide

03:38:14 like what’s in the mind of the early stage AGI

03:38:18 as it first gains the ability to rewrite its own code base

03:38:21 and project itself toward super intelligence.

03:38:24 And if you believe that AI may move toward AGI

03:38:31 out of this sort of synergetic activity

03:38:33 of many agents cooperating together

03:38:35 rather than just have one person’s project,

03:38:37 then who owns and controls that platform for AI cooperation

03:38:42 becomes also very, very important, right?

03:38:47 And is that platform AWS?

03:38:49 Is it Google Cloud?

03:38:50 Is it Alibaba or is it something more like the internet

03:38:53 or Singularity Net, which is open and decentralized?

03:38:56 So if all of my weird machinations come to pass, right?

03:39:01 I mean, we have the Hanson robots

03:39:03 being a beautiful user interface,

03:39:06 gathering information on human values

03:39:09 and being loving and compassionate to people in medical,

03:39:12 home service, robot office applications,

03:39:14 you have Singularity Net in the backend

03:39:16 networking together many different AIs

03:39:19 toward cooperative intelligence,

03:39:21 fueling the robots among many other things.

03:39:24 You have OpenCog 2.0 and true AGI

03:39:27 as one of the sources of AI

03:39:29 inside this decentralized network,

03:39:31 powering the robot and medical AIs

03:39:34 helping us live a long time

03:39:36 and cure diseases among other things.

03:39:39 And this whole thing is operating

03:39:42 in a democratic and decentralized way, right?

03:39:46 And I think if anyone can pull something like this off,

03:39:50 whether using the specific technologies I’ve mentioned

03:39:53 or something else, I mean,

03:39:55 then I think we have higher odds

03:39:58 of moving toward a beneficial technological singularity

03:40:02 rather than one in which the first super AGI

03:40:06 is indifferent to humans

03:40:07 and just considers us an inefficient use of molecules.

03:40:11 That was a beautifully articulated vision for the world.

03:40:15 So thank you for that.

03:40:16 Well, let’s talk a little bit about life and death.

03:40:21 I’m pro life and anti death for most people.

03:40:27 There are a few exceptions that I won’t mention here.

03:40:30 I’m glad just like your dad,

03:40:32 you’re taking a stand against death.

03:40:36 You have, by the way, you have a bunch of awesome music

03:40:39 where you play piano online.

03:40:41 One of the songs that I believe you’ve written

03:40:45 the lyrics go, by the way, I like the way it sounds,

03:40:49 people should listen to it, it’s awesome.

03:40:51 I considered it, I probably will cover it, it’s a good song.

03:40:54 Tell me why do you think it is a good thing

03:40:58 that we all get old and die is one of the songs.

03:41:01 I love the way it sounds,

03:41:03 but let me ask you about death first.

03:41:06 Do you think there’s an element to death

03:41:08 that’s essential to give our life meaning?

03:41:12 Like the fact that this thing ends.

03:41:14 Well, let me say I’m pleased and a little embarrassed

03:41:19 you’ve been listening to that music I put online.

03:41:21 That’s awesome.

03:41:22 One of my regrets in life recently is I would love

03:41:25 to get time to really produce music well.

03:41:28 Like I haven’t touched my sequencer software

03:41:31 in like five years.

03:41:32 I would love to like rehearse and produce and edit.

03:41:37 But with a two year old baby

03:41:39 and trying to create the singularity, there’s no time.

03:41:42 So I just made the decision to,

03:41:45 when I’m playing random shit in an off moment.

03:41:47 Just record it.

03:41:48 Just record it, put it out there, like whatever.

03:41:51 Maybe if I’m unfortunate enough to die,

03:41:54 maybe that can be input to the AGI

03:41:56 when it tries to make an accurate mind upload of me, right?

03:41:58 Death is bad.

03:42:01 I mean, that’s very simple.

03:42:02 It’s baffling we should have to say that.

03:42:04 I mean, of course people can make meaning out of death.

03:42:08 And if someone is tortured,

03:42:10 maybe they can make beautiful meaning out of that torture

03:42:13 and write a beautiful poem

03:42:14 about what it was like to be tortured, right?

03:42:16 I mean, we’re very creative.

03:42:19 We can milk beauty and positivity

03:42:22 out of even the most horrible and shitty things.

03:42:25 But just because if I was tortured,

03:42:27 I could write a good song

03:42:28 about what it was like to be tortured,

03:42:30 doesn’t make torture good.

03:42:31 And just because people are able to derive meaning

03:42:35 and value from death,

03:42:37 doesn’t mean they wouldn’t derive even better meaning

03:42:39 and value from ongoing life without death,

03:42:42 which I very…

03:42:43 Indefinite.

03:42:44 Yeah, yeah.

03:42:45 So if you could live forever, would you live forever?

03:42:47 Forever.

03:42:50 My goal with longevity research

03:42:52 is to abolish the plague of involuntary death.

03:42:57 I don’t think people should die unless they choose to die.

03:43:01 If I had to choose forced immortality

03:43:05 versus dying, I would choose forced immortality.

03:43:09 On the other hand, if I chose…

03:43:11 If I had the choice of immortality

03:43:13 with the choice of suicide whenever I felt like it,

03:43:15 of course I would take that instead.

03:43:17 And that’s the more realistic choice.

03:43:18 I mean, there’s no reason

03:43:20 you should have forced immortality.

03:43:21 You should be able to live until you get sick of living,

03:43:25 right?

03:43:26 I mean, that’s…

03:43:27 And that will seem insanely obvious

03:43:29 to everyone 50 years from now.

03:43:31 And they will be so…

03:43:33 I mean, people who thought death gives meaning to life,

03:43:35 so we should all die,

03:43:37 they will look at that 50 years from now

03:43:39 the way we now look at the Anabaptists in the year 1000

03:41:43 who gave away all their possessions,

03:43:45 went on top of the mountain for Jesus

03:43:47 to come and bring them to the ascension.

03:43:50 I mean, it’s ridiculous that people think death is good

03:43:55 because you gain more wisdom as you approach dying.

03:44:00 I mean, of course it’s true.

03:44:01 I mean, I’m 53.

03:44:03 And the fact that I might have only a few more decades left,

03:44:08 it does make me reflect on things differently.

03:44:11 It does give me a deeper understanding of many things.

03:44:15 But I mean, so what?

03:44:18 You could get a deep understanding

03:44:19 in a lot of different ways.

03:44:20 Pain is the same way.

03:44:22 We’re gonna abolish pain.

03:44:24 And that’s even more amazing than abolishing death, right?

03:44:27 I mean, once we get a little better at neuroscience,

03:44:30 we’ll be able to go in and adjust the brain

03:44:32 so that pain doesn’t hurt anymore, right?

03:44:34 And that, you know, people will say that’s bad

03:44:37 because there’s so much beauty

03:44:39 in overcoming pain and suffering.

03:44:41 Oh, sure.

03:44:42 And there’s beauty in overcoming torture too.

03:44:45 And some people like to cut themselves,

03:44:46 but not many, right?

03:44:48 I mean.

03:44:48 That’s an interesting.

03:44:49 So, but to push, I mean, to push back again,

03:44:52 this is the Russian side of me.

03:44:53 I do romanticize suffering.

03:44:55 It’s not obvious.

03:44:56 I mean, the way you put it, it seems very logical.

03:44:59 It’s almost absurd to romanticize suffering or pain

03:45:02 or death, but to me, a world without suffering,

03:45:07 without pain, without death, it’s not obvious.

03:45:10 Well, then you can stay in the people’s zoo,

03:45:13 people torturing each other.

03:45:15 No, but what I’m saying is I don’t,

03:45:18 well, that’s, I guess what I’m trying to say,

03:45:20 I don’t know if I was presented with that choice,

03:45:22 what I would choose because it, to me.

03:45:25 This is a subtler, it’s a subtler matter.

03:45:30 And I’ve posed it in this conversation

03:45:33 in an unnecessarily extreme way.

03:45:37 So I think, I think the way you should think about it

03:45:41 is what if there’s a little dial on the side of your head

03:45:44 and you could turn how much pain hurts,

03:45:48 turn it down to zero, turn it up to 11,

03:45:50 like in Spinal Tap, if you want,

03:45:52 maybe through an actual spinal tap, right?

03:45:53 So, I mean, would you opt to have that dial there or not?

03:45:58 That’s the question.

03:45:59 The question isn’t whether you would turn the pain down

03:46:02 to zero all the time.

03:46:05 Would you opt to have the dial or not?

03:46:07 My guess is that in some dark moment of your life,

03:46:10 you would choose to have the dial implanted

03:46:12 and then it would be there.

03:46:13 Just to confess a small thing, don’t ask me why,

03:46:17 but I’m doing this physical challenge currently

03:46:20 where I’m doing 680 pushups and pull ups a day.

03:46:25 And my shoulder is currently, as we sit here,

03:46:29 in a lot of pain.

03:46:30 And I don’t know, I would certainly right now,

03:46:35 if you gave me a dial, I would turn that sucker to zero

03:46:38 as quickly as possible.

03:46:40 But I think the whole point of this journey is,

03:46:46 I don’t know.

03:46:47 Well, because you’re a twisted human being.

03:46:49 I’m twisted, so the question is am I somehow twisted

03:46:53 because I created some kind of narrative for myself

03:46:57 so that I can deal with the injustice

03:47:00 and the suffering in the world?

03:47:03 Or is this actually going to be a source of happiness

03:47:06 for me?

03:47:07 Well, this is to an extent is a research question

03:47:10 that humanity will undertake, right?

03:47:12 So I mean, human beings do have a particular biological

03:47:17 makeup, which sort of implies a certain probability

03:47:22 distribution over motivational systems, right?

03:47:25 So I mean, we, and that is there, that is there.

03:47:30 Now the question is how flexibly can that morph

03:47:36 as society and technology change, right?

03:47:38 So if we’re given that dial and we’re given a society

03:47:43 in which say we don’t have to work for a living

03:47:47 and in which there’s an ambient decentralized

03:47:50 benevolent AI network that will warn us

03:47:52 when we’re about to hurt ourself,

03:47:54 if we’re in a different context,

03:47:57 can we consistently with being genuinely and fully human,

03:48:02 can we consistently get into a state of consciousness

03:48:05 where we just want to keep the pain dial turned

03:48:09 all the way down and yet we’re leading very rewarding

03:48:12 and fulfilling lives, right?

03:48:13 Now, I suspect the answer is yes, we can do that,

03:48:17 but I don’t know that, I don’t know that for certain.

03:48:21 Yeah, now I’m more confident that we could create

03:48:25 a nonhuman AGI system, which just didn’t need an analog

03:48:31 of feeling pain.

03:48:33 And I think that AGI system will be fundamentally healthier

03:48:37 and more benevolent than human beings.

03:48:39 So I think it might or might not be true

03:48:42 that humans need a certain element of suffering

03:48:45 to be satisfied humans, consistent with the human physiology.

03:48:49 If it is true, that’s one of the things that makes us fucked

03:48:53 and disqualified to be the super AGI, right?

03:48:58 I mean, the nature of the human motivational system

03:49:03 is that we seem to gravitate towards situations

03:49:08 where the best thing in the large scale

03:49:12 is not the best thing in the small scale

03:49:15 according to our subjective value system.

03:49:18 So we gravitate towards subjective value judgments

03:49:20 where to gratify ourselves in the large,

03:49:22 we have to ungratify ourselves in the small.

03:49:25 And we do that in, you see that in music,

03:49:29 there’s a theory of music which says

03:49:31 the key to musical aesthetics

03:49:33 is the surprising fulfillment of expectations.

03:49:36 Like you want something that will fulfill

03:49:38 the expectations elicited in the prior part of the music,

03:49:41 but in a way with a bit of a twist that surprises you.

03:49:44 And I mean, that’s true not only in out there music

03:49:48 like my own or that of Zappa or Steve Vai or Buckethead

03:49:53 or Krzysztof Penderecki or something,

03:49:55 it’s even there in Mozart or something.

03:49:57 It’s not there in elevator music too much,

03:49:59 but that’s why it’s boring, right?

03:50:02 But wrapped up in there is we want to hurt a little bit

03:50:07 so that we can feel the pain go away.

03:50:11 Like we wanna be a little confused by what’s coming next.

03:50:15 So then when the thing that comes next actually makes sense,

03:50:18 it’s so satisfying, right?

03:50:19 That’s the surprising fulfillment of expectations,

03:50:22 is that what you said?

03:50:23 Yeah, yeah, yeah.

03:50:23 So beautifully put.

03:50:24 We’ve been skirting around a little bit,

03:50:26 but if I were to ask you the most ridiculous big question

03:50:29 of what is the meaning of life,

03:50:32 what would your answer be?

03:50:37 Three values, joy, growth, and choice.

03:50:43 I think you need joy.

03:50:46 I mean, that’s the basis of everything.

03:50:48 If you want the number one value.

03:50:49 On the other hand, I’m unsatisfied with a static joy

03:50:54 that doesn’t progress perhaps because of some

03:50:58 element of human perversity,

03:51:00 but the idea of something that grows

03:51:02 and becomes more and more and better and better

03:51:04 in some sense appeals to me.

03:51:06 But I also sort of like the idea of individuality

03:51:10 that as a distinct system, I have some agency.

03:51:14 So there’s some nexus of causality within this system

03:51:18 rather than the causality being wholly evenly distributed

03:51:22 over the joyous growing mass.

03:51:23 So you start with joy, growth, and choice

03:51:27 as three basic values.

03:51:28 Those three things could continue indefinitely.

03:51:31 That’s something that can last forever.

03:51:35 Is there some aspect of something you called,

03:51:38 which I like, super longevity that you find exciting?

03:51:44 Is there research wise, is there ideas in that space that?

03:51:48 I mean, I think, yeah, in terms of the meaning of life,

03:51:53 this really ties into that because for us as humans,

03:51:58 probably the way to get the most joy, growth, and choice

03:52:02 is transhumanism and to go beyond the human form

03:52:06 that we have right now, right?

03:52:08 I mean, I think human body is great

03:52:10 and by no means do any of us maximize the potential

03:52:15 for joy, growth, and choice immanent in our human bodies.

03:52:18 On the other hand, it’s clear that other configurations

03:52:21 of matter could manifest even greater amounts

03:52:25 of joy, growth, and choice than humans do,

03:52:29 maybe even finding ways to go beyond the realm of matter

03:52:33 as we understand it right now.

03:52:34 So I think in a practical sense,

03:52:38 much of the meaning I see in human life

03:52:40 is to create something better than humans

03:52:42 and go beyond human life.

03:52:45 But certainly that’s not all of it for me

03:52:47 in a practical sense, right?

03:52:49 Like I have four kids and a granddaughter

03:52:51 and many friends and parents and family

03:52:55 and just enjoying everyday human social existence.

03:52:59 But we can do even better.

03:53:00 Yeah, yeah.

03:53:01 And I mean, I love, I’ve always,

03:53:03 when I could live near nature,

03:53:05 I spend a bunch of time out in nature in the forest

03:53:08 and on the water every day and so forth.

03:53:10 So, I mean, enjoying the pleasant moment is part of it,

03:53:15 but the growth and choice aspect are severely limited

03:53:20 by our human biology.

03:53:22 In particular, dying seems to inhibit your potential

03:53:25 for personal growth considerably as far as we know.

03:53:29 I mean, there’s some element of life after death perhaps,

03:53:32 but even if there is,

03:53:34 why not also continue going in this biological realm, right?

03:53:39 In super longevity, I mean,

03:53:43 you know, we haven’t yet cured aging.

03:53:45 We haven’t yet cured death.

03:53:48 Certainly there’s very interesting progress all around.

03:53:51 I mean, CRISPR and gene editing can be an incredible tool.

03:53:57 And I mean, right now,

03:54:00 stem cells could potentially prolong life a lot.

03:54:03 Like if you got stem cell injections

03:54:05 of just stem cells for every tissue of your body

03:54:09 injected into every tissue,

03:54:11 and you can just have replacement of your old cells

03:54:15 with new cells produced by those stem cells,

03:54:17 I mean, that could be highly impactful at prolonging life.

03:54:21 Now we just need slightly better technology

03:54:23 for having them grow, right?

03:54:25 So using machine learning to guide procedures

03:54:28 for stem cell differentiation and transdifferentiation,

03:54:32 it’s kind of nitty gritty,

03:54:33 but I mean, that’s quite interesting.

03:54:36 So I think there’s a lot of different things being done

03:54:41 to help with prolongation of human life,

03:54:44 but we could do a lot better.

03:54:47 So for example, the extracellular matrix,

03:54:51 which is the bunch of proteins

03:54:52 in between the cells in your body,

03:54:54 they get stiffer and stiffer as you get older.

03:54:57 And the extracellular matrix transmits information

03:55:01 both electrically, mechanically,

03:55:03 and to some extent, biophotonically.

03:55:05 So there’s all this transmission

03:55:07 through the parts of the body,

03:55:08 but the stiffer the extracellular matrix gets,

03:55:11 the less the transmission happens,

03:55:13 which makes your body get worse coordinated

03:55:15 between the different organs as you get older.

03:55:17 So my friend Christian Schafmeister

03:55:19 at my alumnus organization,

03:55:22 my Alma mater, the Great Temple University,

03:55:25 Christian Schafmeister has a potential solution to this,

03:55:28 where he has these novel molecules called spiroligomers,

03:55:32 which are like polymers that are not organic.

03:55:34 They’re specially designed polymers

03:55:37 so that you can algorithmically predict

03:55:39 exactly how they’ll fold very simply.

03:55:41 So he designed the molecular scissors

03:55:43 that have spiroligomers that you could eat

03:55:45 and would then cut through all the glucosepane

03:55:49 and other protein crosslinks

03:55:50 in your extracellular matrix, right?

03:55:52 But to make that technology really work

03:55:55 and be mature is several years of work,

03:55:56 and as far as I know, no one’s funding it at the moment.

03:56:00 So there’s so many different ways

03:56:02 that technology could be used to prolong longevity.

03:56:05 What we really need,

03:56:06 we need an integrated database of all biological knowledge

03:56:09 about human beings and model organisms,

03:56:12 like hopefully a massively distributed

03:56:14 OpenCog bio-Atomspace,

03:56:15 but it can exist in other forms too.

03:56:18 We need that data to be opened up

03:56:20 in a suitably privacy protecting way.

03:56:23 We need massive funding into machine learning,

03:56:26 AGI, proto AGI statistical research

03:56:29 aimed at solving biology,

03:56:31 both molecular biology and human biology

03:56:33 based on this massive data set, right?

03:56:36 And then we need regulators not to stop people

03:56:40 from trying radical therapies on themselves

03:56:43 if they so wish to,

03:56:46 as well as better cloud based platforms

03:56:49 for like automated experimentation on microorganisms,

03:56:52 flies and mice and so forth.

03:56:54 And we could do all this.

03:56:55 You look at what happened after the last financial crisis:

03:56:58 Obama, who I generally like pretty well,

03:57:01 but he gave $4 trillion to large banks

03:57:03 and insurance companies.

03:57:05 You know, now in this COVID crisis,

03:57:08 trillions are being spent to help everyday people

03:57:10 and small businesses.

03:57:12 In the end, we’ll probably find many more trillions

03:57:14 are being given to large banks and insurance companies.

03:57:17 Anyway, like could the world put $10 trillion

03:57:21 into making a massive holistic bio AI and bio simulation

03:57:25 and experimental biology infrastructure?

03:57:27 We could, we could put $10 trillion into that

03:57:30 without even screwing us up too badly.

03:57:32 Just as in the end COVID and the last financial crisis

03:57:35 won’t screw up the world economy so badly.

03:57:37 We’re not putting $10 trillion into that.

03:57:39 Instead, all this research is siloed inside

03:57:43 a few big companies and government agencies.

03:57:46 And most of the data that comes from our individual bodies

03:57:51 personally, that could feed this AI to solve aging

03:57:54 and death, most of that data is sitting

03:57:56 in some hospital’s database doing nothing, right?

03:58:03 I got two more quick questions for you.

03:58:07 One, I know a lot of people are gonna ask me,

03:58:09 you are on the Joe Rogan podcast

03:58:11 wearing that same amazing hat.

03:58:14 Do you have a origin story for the hat?

03:58:17 Does the hat have its own story that you’re able to share?

03:58:21 The hat story has not been told yet.

03:58:23 So we’re gonna have to come back

03:58:24 and you can interview the hat.

03:58:27 We’ll leave that for the hat’s own interview.

03:58:30 All right.

03:58:30 It’s too much to pack into.

03:58:32 Is there a book?

03:58:32 Is the hat gonna write a book?

03:58:34 Okay.

03:58:35 Well, it may transmit the information

03:58:38 through direct neural transmission.

03:58:40 Okay, so it’s actually,

03:58:41 there might be some Neuralink competition there.

03:58:44 Beautiful, we’ll leave it as a mystery.

03:58:46 Maybe one last question.

03:58:49 If you build an AGI system,

03:58:54 you’re successful at building the AGI system

03:58:58 that could lead us to the singularity

03:59:00 and you get to talk to her and ask her one question,

03:59:04 what would that question be?

03:59:05 We’re not allowed to ask,

03:59:08 what is the question I should be asking?

03:59:10 Yeah, that would be cheating,

03:59:12 but I guess that’s a good question.

03:59:14 I’m thinking of a,

03:59:15 I wrote a story with Stefan Bugay once

03:59:18 where these AI developers,

03:59:23 they created a super smart AI

03:59:25 aimed at answering all the philosophical questions

03:59:31 that have been worrying them.

03:59:32 Like what is the meaning of life?

03:59:34 Is there free will?

03:59:35 What is consciousness and so forth?

03:59:37 So they got the super AGI built

03:59:40 and it churned for a while.

03:59:43 It said, those are really stupid questions.

03:59:46 And then it took off on a spaceship and left the Earth.

03:59:51 So you’d be afraid of scaring it off.

03:59:55 That’s it, yeah.

03:59:56 I mean, honestly, there is no one question

04:00:01 that rises above all the others, really.

04:00:08 I mean, what interests me more

04:00:10 is upgrading my own intelligence

04:00:13 so that I can absorb the whole world view of the super AGI.

04:00:19 But I mean, of course, if the answer could be like,

04:00:23 what is the chemical formula for the immortality pill?

04:00:27 Like then I would ask that, or to emit a bit string,

04:00:33 which will be the code for a super AGI

04:00:38 on the Intel i7 processor.

04:00:41 So those would be good questions.

04:00:42 So if your own mind was expanded

04:00:46 to become super intelligent, like you’re describing,

04:00:49 I mean, there’s kind of a notion

04:00:53 that intelligence is a burden, that it’s possible

04:00:57 that with greater and greater intelligence,

04:01:00 that other metric of joy that you mentioned

04:01:03 becomes more and more difficult.

04:01:04 What’s your sense?

04:01:05 Pretty stupid idea.

04:01:08 So you think if you’re super intelligent,

04:01:09 you can also be super joyful?

04:01:11 I think getting root access to your own brain

04:01:15 will enable new forms of joy that we don’t have now.

04:01:19 And I think as I’ve said before,

04:01:22 what I aim at is really make multiple versions of myself.

04:01:27 So I would like to keep one version,

04:01:30 which is basically human like I am now,

04:01:33 but keep the dial to turn pain up and down

04:01:36 and get rid of death, right?

04:01:38 And make another version which fuses its mind

04:01:43 with superhuman AGI,

04:01:46 and then will become massively transhuman.

04:01:50 And whether it will send some messages back

04:01:52 to the human me or not will be interesting to find out.

04:01:55 The thing is, once you’re a super AGI,

04:01:58 like one subjective second to a human

04:02:01 might be like a million subjective years

04:02:03 to that super AGI, right?

04:02:04 So it would be on a whole different basis.

04:02:07 I mean, at very least those two copies will be good to have,

04:02:10 but it could be interesting to put your mind

04:02:13 into a dolphin or a space amoeba

04:02:16 or all sorts of other things.

04:02:18 You can imagine one version that doubled its intelligence

04:02:21 every year and another version that just became

04:02:24 a super AGI as fast as possible, right?

04:02:26 So, I mean, now we’re sort of constrained to think

04:02:29 one mind, one self, one body, right?

04:02:33 But I think we actually, we don’t need to be that

04:02:36 constrained in thinking about future intelligence

04:02:40 after we’ve mastered AGI and nanotechnology

04:02:44 and longevity biology.

04:02:47 I mean, then each of our minds

04:02:49 is a certain pattern of organization, right?

04:02:52 And I know we haven’t talked about consciousness,

04:02:54 but I sort of, I’m panpsychist.

04:02:56 I sort of view the universe as conscious.

04:03:00 And so, you know, a light bulb or a quark

04:03:03 or an ant or a worm or a monkey

04:03:06 have their own manifestations of consciousness.

04:03:08 And the human manifestation of consciousness,

04:03:11 it’s partly tied to the particular meat

04:03:15 that we’re manifested by, but it’s largely tied

04:03:19 to the pattern of organization in the brain, right?

04:03:22 So, if you upload yourself into a computer

04:03:25 or a robot or whatever else it is,

04:03:28 some element of your human consciousness may not be there

04:03:31 because it’s just tied to the biological embodiment.

04:03:34 But I think most of it will be there.

04:03:36 And these will be incarnations of your consciousness

04:03:40 in a slightly different flavor.

04:03:42 And, you know, creating these different versions

04:03:45 will be amazing, and each of them will discover

04:03:48 meanings of life that have some overlap,

04:03:52 but probably not total overlap

04:03:54 with the human Ben’s meaning of life.

04:03:59 The thing is, to get to that future

04:04:02 where we can explore different varieties of joy,

04:04:06 different variations of human experience and values

04:04:09 and transhuman experiences and values to get to that future,

04:04:13 we need to navigate through a whole lot of human bullshit

04:04:16 of companies and governments and killer drones

04:04:21 and making and losing money and so forth, right?

04:04:25 And that’s the challenge we’re facing now

04:04:28 is if we do things right,

04:04:30 we can get to a benevolent singularity,

04:04:33 which is levels of joy, growth, and choice

04:04:36 that are literally unimaginable to human beings.

04:04:39 If we do things wrong,

04:04:41 we could either annihilate all life on the planet,

04:04:44 or we could lead to a scenario where, say,

04:04:47 all humans are annihilated and there’s some super AGI

04:04:52 that goes on and does its own thing unrelated to us

04:04:55 except via our role in originating it.

04:04:58 And we may well be at a bifurcation point now, right?

04:05:02 Where what we do now has significant causal impact

04:05:05 on what comes about,

04:05:06 and yet most people on the planet

04:05:09 aren’t thinking that way whatsoever,

04:05:11 they’re thinking only about their own narrow aims

04:05:16 and goals, right?

04:05:17 Now, of course, I’m thinking about my own narrow aims

04:05:20 and goals to some extent also,

04:05:24 but I’m trying to use as much of my energy and mind as I can

04:05:29 to push toward this more benevolent alternative,

04:05:33 which will be better for me,

04:05:34 but also for everybody else.

04:05:37 And it’s weird that so few people understand

04:05:42 what’s going on.

04:05:43 I know you interviewed Elon Musk,

04:05:44 and he understands a lot of what’s going on,

04:05:47 but he’s much more paranoid than I am, right?

04:05:49 Because Elon gets that AGI

04:05:52 is gonna be way, way smarter than people,

04:05:54 and he gets that an AGI does not necessarily

04:05:57 have to give a shit about people

04:05:58 because we’re a very elementary mode of organization

04:06:01 of matter compared to many AGIs.

04:06:04 But I don’t think he has a clear vision

04:06:06 of how infusing early stage AGIs

04:06:10 with compassion and human warmth

04:06:13 can lead to an AGI that loves and helps people

04:06:18 rather than viewing us as a historical artifact

04:06:22 and a waste of mass energy.

04:06:26 But on the other hand,

04:06:28 while I have some disagreements with him,

04:06:29 like he understands way, way more of the story

04:06:33 than almost anyone else

04:06:34 in such a large scale corporate leadership position, right?

04:06:38 It’s terrible how little understanding

04:06:40 of these fundamental issues exists out there now.

04:06:45 That may be different five or 10 years from now though,

04:06:47 because I can see understanding of AGI and longevity

04:06:51 and other such issues is certainly much stronger

04:06:54 and more prevalent now than 10 or 15 years ago, right?

04:06:57 So I mean, humanity as a whole can be slow learners

04:07:02 relative to what I would like,

04:07:05 but on a historical sense, on the other hand,

04:07:08 you could say the progress is astoundingly fast.

04:07:11 But Elon also said, I think on the Joe Rogan podcast,

04:07:15 that love is the answer.

04:07:17 So maybe in that way, you and him are both on the same page

04:07:21 of how we should proceed with AGI.

04:07:24 I think there’s no better place to end it.

04:07:27 I hope we get to talk again about the hat

04:07:30 and about consciousness

04:07:32 and about a million topics we didn’t cover.

04:07:34 Ben, it’s a huge honor to talk to you.

04:07:36 Thank you for making it out.

04:07:37 Thank you for talking today.

04:07:39 Thanks for having me.

04:07:40 This was really, really good fun

04:07:44 and we dug deep into some very important things.

04:07:47 So thanks for doing this.

04:07:48 Thanks very much.

04:07:49 Awesome.

04:07:51 Thanks for listening to this conversation with Ben Goertzel

04:07:53 and thank you to our sponsors,

04:07:55 The Jordan Harbinger Show and Masterclass.

04:07:59 Please consider supporting the podcast

04:08:01 by going to jordanharbinger.com slash lex

04:08:04 and signing up to Masterclass at masterclass.com slash lex.

04:08:09 Click the links, buy the stuff.

04:08:12 It’s the best way to support this podcast

04:08:14 and the journey I’m on in my research and startup.

04:08:18 If you enjoy this thing, subscribe on YouTube,

04:08:21 review it with five stars on Apple Podcast,

04:08:23 support it on Patreon or connect with me on Twitter

04:08:26 at lexfriedman spelled without the E, just F R I D M A N.

04:08:32 I’m sure eventually you will figure it out.

04:08:35 And now let me leave you with some words from Ben Goertzel.

04:08:39 Our language for describing emotions is very crude.

04:08:42 That’s what music is for.

04:08:43 Thank you for listening and hope to see you next time.