Risto Miikkulainen: Neuroevolution and Evolutionary Computation #177

Transcript

00:00:00 The following is a conversation with Risto Miikkulainen,

00:00:02 a computer scientist at University of Texas at Austin

00:00:05 and Associate Vice President

00:00:07 of Evolutionary Artificial Intelligence at Cognizant.

00:00:11 He specializes in evolutionary computation,

00:00:14 but also many other topics in artificial intelligence,

00:00:17 cognitive science, and neuroscience.

00:00:19 Quick mention of our sponsors,

00:00:21 the Jordan Harbinger Show, Grammarly, Belcampo, and Indeed.

00:00:26 Check them out in the description to support this podcast.

00:00:30 As a side note, let me say that nature-inspired algorithms

00:00:34 from ant colony optimization to genetic algorithms

00:00:36 to cellular automata to neural networks

00:00:39 have always captivated my imagination,

00:00:41 not only for their surprising power

00:00:43 in the face of long odds,

00:00:45 but because they always opened up doors

00:00:47 to new ways of thinking about computation.

00:00:50 It does seem that in the long arc of computing history,

00:00:54 running toward biology, not running away from it

00:00:57 is what leads to long term progress.

00:01:00 This is the Lex Fridman Podcast,

00:01:03 and here is my conversation with Risto Miikkulainen.

00:01:07 If we ran the Earth experiment,

00:01:10 this fun little experiment we’re on,

00:01:12 over and over and over and over a million times

00:01:15 and watch the evolution of life as it pans out,

00:01:19 how much variation in the outcomes of that evolution

00:01:21 do you think we would see?

00:01:23 Now, we should say that you are a computer scientist.

00:01:27 That’s actually not such a bad question

00:01:29 for a computer scientist,

00:01:30 because we are building simulations of these things,

00:01:34 and we are simulating evolution,

00:01:36 and that’s a difficult question to answer in biology,

00:01:38 but we can build a computational model

00:01:40 and run it a million times and actually answer that question.

00:01:43 How much variation do we see when we simulate it?

00:01:47 And that’s a little bit beyond what we can do today,

00:01:50 but I think that we will see some regularities,

00:01:54 and it took evolution also a really long time

00:01:56 to get started,

00:01:57 and then things accelerated really fast towards the end.

00:02:02 But there are things that need to be discovered,

00:02:04 and they probably will be over and over again,

00:02:06 like manipulation of objects,

00:02:10 opposable thumbs,

00:02:11 and also some way to communicate,

00:02:16 maybe orally, like when you have speech,

00:02:18 it might be some other kind of sounds,

00:02:20 and decision making, but also vision.

00:02:24 The eye has evolved many times.

00:02:26 Various vision systems have evolved.

00:02:28 So we would see those kinds of solutions,

00:02:30 I believe, emerge over and over again.

00:02:32 They may look a little different,

00:02:34 but they get the job done.

00:02:36 The really interesting question is,

00:02:37 would we have primates?

00:02:38 Would we have humans or something that resembles humans?

00:02:43 And would that be an apex of evolution after a while?

00:02:47 We don’t know where we’re going from here,

00:02:48 but we certainly see a lot of tool use

00:02:51 and building, constructing our environment.

00:02:54 So I think that we will get that.

00:02:56 We get some evolution producing

00:02:58 some agents that can do that,

00:03:00 manipulate the environment and build.

00:03:02 What do you think is special about humans?

00:03:04 Like if you were running the simulation

00:03:06 and you observe humans emerge,

00:03:08 like these tool makers,

00:03:09 they start a fire and all this stuff,

00:03:11 start running around, building buildings,

00:03:12 and then running for president and all those kinds of things.

00:03:15 What would be, how would you detect that?

00:03:19 Cause you’re like really busy

00:03:20 as the creator of this evolutionary system.

00:03:23 So you don’t have much time to observe,

00:03:25 like detect if any cool stuff came up, right?

00:03:28 How would you detect humans?

00:03:31 Well, you are running the simulation.

00:03:33 So you also put in visualization

00:03:37 and measurement techniques there.

00:03:39 So if you are looking for certain things like communication,

00:03:44 you’ll have detectors to find out whether that’s happening,

00:03:48 even if it’s a large simulation.

00:03:50 And I think that that’s what we would do.

00:03:53 We know roughly what we want,

00:03:56 intelligent agents that communicate, cooperate, manipulate,

00:04:01 and we would build detections

00:04:03 and visualizations of those processes.
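
One concrete way to picture such a detector, as a minimal sketch (the logged signals and actions here are hypothetical, and this is just one plausible proxy, not a method from the conversation): estimate the mutual information between one agent’s emitted signals and another agent’s subsequent actions, and compare it against a shuffled baseline.

```python
from math import log2
from collections import Counter

def mutual_information(signals, actions):
    """Estimate I(signal; action) in bits from paired discrete event logs."""
    n = len(signals)
    joint = Counter(zip(signals, actions))
    p_s, p_a = Counter(signals), Counter(actions)
    return sum((c / n) * log2((c / n) / ((p_s[s] / n) * (p_a[a] / n)))
               for (s, a), c in joint.items())

# Hypothetical logs: agent A's signal at time t, agent B's action at t+1.
signals = [0, 1, 0, 1, 1, 0, 0, 1, 1, 0]
actions = [0, 1, 0, 1, 1, 0, 1, 1, 1, 0]
print(f"I(signal; action) = {mutual_information(signals, actions):.3f} bits")
# Values well above the same statistic computed on shuffled logs suggest
# the signals actually drive behavior: a candidate sign of communication.
```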

00:04:05 Yeah, and there’s a lot of,

00:04:08 we’d have to run it many times

00:04:09 and we have plenty of time to figure out

00:04:11 how we detect the interesting things.

00:04:13 But also, I think we do have to run it many times

00:04:16 because we don’t quite know what shape those will take

00:04:21 and our detectors may not be perfect for them

00:04:23 at the beginning.

00:04:24 Well, that seems really difficult to build a detector

00:04:27 of intelligent or intelligent communication.

00:04:32 Sort of, if we take an alien perspective,

00:04:35 observing earth, are you sure that they would be able

00:04:39 to detect humans as the special thing?

00:04:41 Wouldn’t they be already curious about other things?

00:04:43 There are way more insects by body mass, I think,

00:04:47 than humans by far, and colonies.

00:04:50 Obviously, dolphins are the most intelligent creatures

00:04:53 on earth, we all know this.

00:04:55 So it could be the dolphins that they detect.

00:04:58 It could be the rockets that we seem to be launching.

00:05:00 That could be the intelligent creature they detect.

00:05:03 It could be something else, trees.

00:05:06 Trees have been here a long time.

00:05:07 I just learned that sharks have been here

00:05:10 400 million years and that’s longer

00:05:13 than trees have been here.

00:05:15 So maybe it’s the sharks, they go by age.

00:05:17 Like there’s a persistent thing.

00:05:19 Like if you survive long enough,

00:05:20 especially through the mass extinctions,

00:05:22 that could be the thing your detector is detecting.

00:05:25 Humans have been here for a very short time

00:05:27 and we’re just creating a lot of pollution,

00:05:30 but so are the other creatures.

00:05:31 So I don’t know, do you think you’d be able

00:05:34 to detect humans?

00:05:35 Like how would you go about detecting

00:05:37 in the computational sense?

00:05:39 Maybe we can leave humans behind.

00:05:40 In the computational sense, detect interesting things.

00:05:46 Do you basically have to have a strict objective function

00:05:48 by which you measure the performance of a system

00:05:51 or can you find curiosities and interesting things?

00:05:55 Yeah, well, I think that the first measurement

00:05:59 would be to detect how much of an effect

00:06:02 you can have in your environment.

00:06:03 So if you look around, we have cities

00:06:06 and that is constructed environments.

00:06:08 And that’s where a lot of people live, most people live.

00:06:11 So that would be a good sign of intelligence

00:06:15 that you don’t just live in an environment,

00:06:17 but you construct it to your liking.

00:06:20 And that’s something pretty unique.

00:06:21 I mean, certainly birds build nests,

00:06:24 but they don’t quite build cities.

00:06:25 Termites build mounds and things like that.

00:06:29 But the complexity of human-constructed cities,

00:06:32 I think would stand out even to an external observer.

00:06:34 Of course, that’s what a human would say.

00:06:36 Yeah, and you know, you can certainly say

00:06:39 that sharks are really smart

00:06:41 because they’ve been around so long

00:06:43 and they haven’t destroyed their environment,

00:06:45 which humans are about to do,

00:06:46 which is not a very smart thing.

00:06:48 But we’ll get over it, I believe.

00:06:52 And we can get over it by doing some construction

00:06:55 that actually is benign

00:06:56 and maybe even enhances the resilience of nature.

00:07:02 So you mentioned the simulation that we run over and over

00:07:05 might start slowly, it’s a slow start.

00:07:08 So do you think how unlikely, first of all,

00:07:12 I don’t know if you think about this kind of stuff,

00:07:14 but how unlikely is step number zero,

00:07:18 which is the springing up,

00:07:20 like the origin of life on earth?

00:07:22 And second, how unlikely is

00:07:27 anything interesting happening beyond that?

00:07:30 So like the start that creates

00:07:34 all the rich complexity that we see on earth today.

00:07:36 Yeah, there are people who are working

00:07:38 on exactly that problem from primordial soup.

00:07:42 How do you actually get self-replicating molecules?

00:07:45 And they are very close.

00:07:48 With a little bit of help, you can make that happen.

00:07:51 So of course we know what we want,

00:07:55 so they can set up the conditions

00:07:57 and try out conditions that are conducive to that.

00:08:00 For evolution to discover that, that took a long time.

00:08:04 For us to recreate it probably won’t take that long.

00:08:07 And the next steps from there,

00:08:10 I think also with some handholding,

00:08:12 I think we can make that happen.

00:08:15 But with evolution, what was really fascinating

00:08:18 was eventually the runaway evolution of the brain

00:08:22 that created humans and created,

00:08:24 well, also other higher animals,

00:08:27 that that was something that happened really fast.

00:08:29 And that’s a big question.

00:08:32 Is that something replicable?

00:08:33 Is that something that can happen?

00:08:35 And if it happens, does it go in the same direction?

00:08:39 That is a big question to ask.

00:08:40 Even in computational terms,

00:08:42 I think that it’s relatively possible to

00:08:47 create an experiment where we look at the primordial soup

00:08:49 and the first couple of steps

00:08:51 of multicellular organisms even.

00:08:53 But to get something as complex as the brain,

00:08:57 we don’t quite know the conditions for that.

00:08:59 And how do you even get started

00:09:01 and whether we can get this kind of runaway evolution

00:09:03 happening?

00:09:05 From a detector perspective,

00:09:09 if we’re observing this evolution,

00:09:10 what do you think is the brain?

00:09:12 What do you think is the, let’s say, what is intelligence?

00:09:15 So in terms of the thing that makes humans special,

00:09:18 we seem to be able to reason,

00:09:21 we seem to be able to communicate.

00:09:23 But the core of that is this something

00:09:26 in the broad category we might call intelligence.

00:09:29 So if you put your computer scientist hat on,

00:09:33 are there favorite ways you like to think about

00:09:37 that question of what is intelligence?

00:09:41 Well, my goal is to create agents that are intelligent.

00:09:48 Not to define what it is.

00:09:49 And that is a way of defining it.

00:09:52 And that means that it’s some kind of an object

00:09:57 or a program that has limited sensory

00:10:02 and effector capabilities interacting with the world.

00:10:08 And then also a mechanism for making decisions.

00:10:11 So with limited abilities like that, can it survive?

00:10:17 Survival is the simplest goal,

00:10:18 but you could also give it other goals.

00:10:20 Can it multiply?

00:10:21 Can it solve problems that you give it?

00:10:24 And that is quite a bit less than human intelligence.

00:10:27 Animals would be intelligent, of course,

00:10:29 with that definition.

00:10:31 And maybe even some other forms of life.

00:10:35 So intelligence in that sense is a survival skill

00:10:41 given resources that you have and using your resources

00:10:44 so that you will stay around.

00:10:47 Do you think death, mortality is fundamental to an agent?

00:10:53 So like there’s, I don’t know if you’re familiar,

00:10:55 there’s a philosopher named Ernest Becker

00:10:56 who wrote The Denial of Death and his whole idea.

00:11:01 And there’s folks, psychologists, cognitive scientists

00:11:04 that work on terror management theory.

00:11:06 And they think that one of the special things about humans

00:11:10 is that we’re able to sort of foresee our death, right?

00:11:13 We can realize not just as animals do,

00:11:16 sort of constantly fear in an instinctual sense,

00:11:19 respond to all the dangers that are out there,

00:11:21 but like understand that this ride ends eventually.

00:11:25 And that in itself is the force behind

00:11:29 all of the creative efforts of human nature.

00:11:32 That’s the philosophy.

00:11:33 I think that makes sense, a lot of sense.

00:11:35 I mean, animals probably don’t think of death the same way,

00:11:38 but humans know that your time is limited

00:11:40 and you wanna make it count.

00:11:43 And you can make it count in many different ways,

00:11:44 but I think that has a lot to do with creativity

00:11:47 and the need for humans to do something

00:11:50 beyond just surviving.

00:11:51 And now going from that simple definition

00:11:54 to something that’s the next level,

00:11:56 I think that that could be the second level of definition,

00:12:00 that intelligence means something,

00:12:03 that you do something that stays behind you,

00:12:05 that’s more than your existence.

00:12:09 You create something that is useful for others,

00:12:12 is useful in the future, not just for yourself.

00:12:15 And I think that’s the nicest definition of intelligence

00:12:17 at the next level.

00:12:19 And it’s also nice because it doesn’t require

00:12:23 that they are humans or biological.

00:12:25 They could be artificial agents that are intelligent.

00:12:28 They could achieve those kinds of goals.

00:12:30 So for a particular agent, the ripple effects of their existence

00:12:35 on the entirety of the system is significant.

00:12:38 So like they leave a trace where there’s like a,

00:12:41 yeah, like ripple effects.

00:12:43 But see, then you go back to the butterfly

00:12:46 with the flap of a wing and then you can trace

00:12:48 a lot of like nuclear wars

00:12:50 and all the conflicts of human history,

00:12:52 somehow connected to that one butterfly

00:12:54 that created all of the chaos.

00:12:56 So maybe that’s not, maybe that’s a very poetic way

00:13:00 to think that that’s something we humans

00:13:03 in a human centric way wanna hope we have this impact.

00:13:09 Like that is the secondary effect of our intelligence.

00:13:12 We’ve had the long lasting impact on the world,

00:13:14 but maybe the entirety of physics in the universe

00:13:20 has very long-lasting effects.

00:13:22 Sure, but you can also think of it.

00:13:25 What if, like in It’s a Wonderful Life, you’re not here?

00:13:29 Will somebody else do this?

00:13:31 Is it something that you actually contributed

00:13:34 because you had something unique to contribute?

00:13:36 To contribute, that’s a pretty high bar though.

00:13:39 Uniqueness, yeah.

00:13:40 So, you have to be Mozart or something to actually

00:13:45 reach that level where nobody else would have developed it,

00:13:47 but other people might have solved this equation

00:13:51 if you didn’t do it, but also within limited scope.

00:13:55 I mean, during your lifetime or next year,

00:14:00 you could contribute something unique

00:14:02 that other people did not see.

00:14:04 And then that could change the way things move forward

00:14:09 for a while.

00:14:11 So, I don’t think we have to be Mozart

00:14:14 to be called intelligent,

00:14:15 but we have this local effect that changes things.

00:14:18 If you weren’t there, that would not have happened.

00:14:20 And it’s a positive effect, of course,

00:14:21 you want it to be a positive effect.

00:14:23 Do you think it’s possible to engineer

00:14:25 into computational agents, a fear of mortality?

00:14:30 Like, does that make any sense?

00:14:35 So, there’s a very trivial thing where it’s like,

00:14:38 you could just code in a parameter,

00:15:39 which is how long until the life ends,

00:14:41 but more of a fear of mortality,

00:14:45 like awareness of the way that things end

00:14:48 and somehow encoding a complex representation of that fear,

00:14:54 which is like, maybe as it gets closer,

00:14:56 you become more terrified.

00:14:58 I mean, there seems to be something really profound

00:15:01 about this fear that’s not currently encodable

00:15:04 in a trivial way into our programs.

00:15:08 Well, I think you’re referring to the emotion of fear,

00:15:11 something, because we have cognitively,

00:15:13 we know that we have limited lifespan

00:15:16 and most of us cope with it by just,

00:15:18 hey, that’s what the world is like

00:15:19 and I make the most of it.

00:15:20 But sometimes you can have like a fear that’s not healthy,

00:15:26 that paralyzes you, that you can’t do anything.

00:15:29 And somewhere in between there,

00:15:31 not caring at all and getting paralyzed because of fear

00:15:36 is a normal response,

00:15:37 which is a little bit more than just logic

00:15:39 and it’s emotion.

00:15:41 So now the question is, what good are emotions?

00:15:43 I mean, they are quite complex

00:15:46 and there are multiple dimensions of emotions

00:15:48 and they probably do serve a survival function,

00:15:53 heightened focus, for instance.

00:15:55 And fear of death might be a really good emotion

00:15:59 when you are in danger, that you recognize it,

00:16:02 even if it’s not logically necessarily easy to derive

00:16:06 and you don’t have time for that logical deduction,

00:16:10 you may be able to recognize the situation is dangerous

00:16:12 and this fear kicks in and you all of a sudden perceive

00:16:16 the facts that are important for that.

00:16:18 And I think that, generally, is the role of emotions.

00:16:21 It allows you to focus on what’s relevant for your situation.

00:16:24 And maybe fear of death plays the same kind of role,

00:16:27 but if it consumes you and it’s something that you think

00:16:30 in normal life when you don’t have to,

00:16:32 then it’s not healthy and then it’s not productive.

00:16:34 Yeah, but it’s fascinating to think

00:16:36 how to incorporate emotion into a computational agent.

00:16:41 It almost seems like a silly statement to make,

00:16:45 but it perhaps seems silly because we have

00:16:48 such a poor understanding of the mechanism of emotion,

00:16:51 of fear, of, I think at the core of it

00:16:56 is another word that we know nothing about,

00:17:00 but say a lot, which is consciousness.

00:17:03 Do you ever in your work, or like maybe on a coffee break,

00:17:08 think about what the heck is this thing consciousness

00:17:11 and is it at all useful in our thinking about AI systems?

00:17:14 Yes, it is an important question.

00:17:18 You can build representations and functions,

00:17:23 I think into these agents that act like emotions

00:17:26 and consciousness perhaps.

00:17:28 So I mentioned emotions being something

00:17:31 that allow you to focus and pay attention,

00:17:34 filter out what’s important.

00:17:35 Yeah, you can have that kind of a filter mechanism

00:17:38 and it puts you in a different state.

00:17:40 Your computation is in a different state.

00:17:42 Certain things don’t really get through

00:17:43 and others are heightened.

00:17:46 Now you label that box emotion.

00:17:48 I don’t know if that means it’s an emotion,

00:17:49 but it acts very much like we understand

00:17:52 what emotions are.
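
As a minimal sketch of such a filter mechanism (purely illustrative; the states, channels, and gains are made-up assumptions, not a model from the conversation): an "emotional" state reweights incoming stimuli, so certain channels don’t really get through and others are heightened.

```python
def perceive(stimuli, state):
    """State-dependent filter: the agent's 'emotional' state reweights
    input channels, suppressing some and heightening others."""
    gains = {
        "calm": {"food": 1.0, "threat": 1.0, "social": 1.0},
        "fear": {"food": 0.2, "threat": 3.0, "social": 0.5},
    }[state]
    return {k: v * gains.get(k, 1.0) for k, v in stimuli.items()}

stimuli = {"food": 0.8, "threat": 0.4, "social": 0.6}
print(perceive(stimuli, "calm"))
print(perceive(stimuli, "fear"))   # threat signals dominate perception
```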

00:17:54 And we actually did some work like that,

00:17:56 modeling hyenas who were trying to steal a kill from lions,

00:18:02 which happens in Africa.

00:18:03 I mean, hyenas are quite intelligent,

00:18:05 but not really intelligent.

00:18:08 And they have this behavior

00:18:11 that’s more complex than anything else they do.

00:18:14 They can band together, if there’s about 30 of them or so,

00:18:17 they can coordinate their effort

00:18:20 so that they push the lions away from a kill.

00:18:22 Even though the lions are so strong

00:18:24 that they could kill a hyena by striking with a paw.

00:18:28 But when they work together and precisely time this attack,

00:18:31 the lions will leave and they get the kill.

00:18:34 And probably there are some states

00:18:38 like emotions that the hyenas go through.

00:18:40 First, they call for reinforcements.

00:18:43 They really want that kill, but there’s not enough of them.

00:18:45 So they vocalize and there’s more people,

00:18:48 more hyenas that come around.

00:18:50 And then they have two emotions.

00:18:52 They’re very afraid of the lion, so they want to stay away,

00:18:55 but they also have a strong affiliation between each other.

00:18:59 And then this is the balance of the two emotions.

00:19:02 And also, yes, they also want the kill.

00:19:04 So they’re both repelled and attracted.

00:19:07 But then this affiliation eventually is so strong

00:19:10 that when they move, they move together,

00:19:12 they act as a unit and they can perform that function.

00:19:15 So there’s an interesting behavior

00:19:18 that seems to depend on these emotions strongly

00:19:21 and makes it possible to coordinate the actions.
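
A rough sketch of the kind of emotion-balance model described here (illustrative only; the weights, geometry, and update rule are assumptions, not the actual hyena model, and the attraction to the kill is omitted for brevity): fear acts as a repulsion vector away from the lion, affiliation as an attraction vector toward the pack, and movement follows their weighted sum.

```python
import numpy as np

def hyena_step(pos, lion_pos, pack_positions, fear_w, affil_w):
    """One movement step: a fear (repulsion) vector away from the lion plus
    an affiliation (attraction) vector toward the pack center, weighted."""
    away_from_lion = pos - lion_pos
    away_from_lion /= np.linalg.norm(away_from_lion) + 1e-9
    toward_pack = np.mean(pack_positions, axis=0) - pos
    toward_pack /= np.linalg.norm(toward_pack) + 1e-9
    return pos + 0.1 * (fear_w * away_from_lion + affil_w * toward_pack)

lion = np.array([0.0, 0.0])
pack = [np.array([4.0, 1.0]), np.array([6.0, 1.0]), np.array([5.0, 2.0])]
pos = np.array([5.0, 0.0])

# While fear dominates, the hyena keeps its distance; as reinforcements
# arrive and affiliation outweighs fear, the group moves as a unit.
print(hyena_step(pos, lion, pack, fear_w=1.0, affil_w=0.3))
print(hyena_step(pos, lion, pack, fear_w=0.3, affil_w=1.0))
```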

00:19:24 And I think a critical aspect of that,

00:19:28 the way you’re describing it, is that emotion there

00:19:30 is a mechanism of social communication,

00:19:34 of a social interaction.

00:19:35 Maybe humans wouldn’t even be that intelligent

00:19:40 or most things we think of as intelligent

00:19:42 wouldn’t be that intelligent without the social component

00:19:45 of interaction.

00:19:47 Maybe much of our intelligence

00:19:48 is essentially an outgrowth of social interaction.

00:19:52 And maybe for the creation of intelligent agents,

00:19:55 we have to be creating fundamentally social systems.

00:19:58 Yes, I strongly believe that’s true.

00:20:01 And yes, the communication is multifaceted.

00:20:05 I mean, they vocalize and call for friends,

00:20:08 but they also rub against each other and they push

00:20:11 and they do all kinds of gestures and so on.

00:20:14 So they don’t act alone.

00:20:15 And I don’t think people act alone very much either,

00:20:18 at least normally, most of the time.

00:20:21 And social systems are so strong for humans

00:20:25 that I think we build everything

00:20:26 on top of these kinds of structures.

00:20:28 And there’s one interesting theory around that,

00:20:30 about language, for instance,

00:20:32 about language origins: where did language come from?

00:20:36 And it’s a plausible theory that first came social systems,

00:20:41 that you have different roles in a society.

00:20:45 And then those roles are exchangeable,

00:20:47 that I scratch your back, you scratch my back,

00:20:49 we can exchange roles.

00:20:51 And once you have the brain structures

00:20:53 that allow you to understand actions

00:20:54 in terms of roles that can be changed,

00:20:57 that’s the basis for language, for grammar.

00:20:59 And now you can start using symbols

00:21:02 to refer to objects in the world.

00:21:04 And you have this flexible structure.

00:21:06 So there’s a social structure

00:21:09 that’s fundamental for language to develop.

00:21:12 Now, again, then you have language,

00:21:13 you can refer to things that are not here right now.

00:21:17 And that allows you to then build all the good stuff

00:21:20 about planning, for instance, and building things and so on.

00:21:24 So yeah, I think that very strongly humans are social

00:21:28 and that gives us the ability to structure the world.

00:21:33 But also as a society, we can do so much more

00:21:35 because one person does not have to do everything.

00:21:38 You can have different roles

00:21:39 and together achieve a lot more.

00:21:41 And that’s also something

00:21:42 we see in computational simulations today.

00:21:44 I mean, we have multi agent systems that can perform tasks.

00:21:47 This fascinating demonstration, Marco Dorigo,

00:21:50 I think it was, these little robots

00:21:53 that had to navigate through an environment

00:21:54 and there were things that are dangerous,

00:21:57 like maybe a big chasm or some kind of groove, a hole,

00:22:02 and they could not get across it.

00:22:03 But if they grab each other with their gripper,

00:22:06 they formed a robot that was much longer than any one of them,

00:22:09 and this way they could get across that.

00:22:12 So this is a great example of how together

00:22:15 we can achieve things we couldn’t otherwise.

00:22:17 Like the hyenas, you know, alone they couldn’t,

00:22:19 but as a team they could.

00:22:21 And I think humans do that all the time.

00:22:23 We’re really good at that.

00:22:24 Yeah, and the way you described the system of hyenas,

00:22:27 it almost sounds algorithmic.

00:22:29 Like the problem with humans is they’re so complex,

00:22:32 it’s hard to think of them as algorithms.

00:22:35 But with hyenas, there’s a, it’s simple enough

00:22:39 to where it feels like, at least hopeful

00:22:42 that it’s possible to create computational systems

00:22:46 that mimic that.

00:22:48 Yeah, that’s exactly why we looked at that.

00:22:51 As opposed to humans.

00:22:54 Like I said, they are intelligent,

00:22:55 but they are not quite as intelligent as say, baboons,

00:22:59 which would learn a lot and would be much more flexible.

00:23:02 The hyenas are relatively rigid in what they can do.

00:23:05 And therefore you could look at this behavior,

00:23:08 like this is a breakthrough in evolution about to happen.

00:23:11 That they’ve discovered something about social structures,

00:23:14 communication, about cooperation,

00:23:17 and it might then spill over to other things too

00:23:20 in thousands of years in the future.

00:23:22 Yeah, I think the problem with baboons and humans

00:23:24 is probably too much is going on inside the head.

00:23:27 We won’t be able to measure it if we’re observing the system.

00:23:30 With hyenas, it’s probably easier to observe

00:23:34 the actual decision making and the various motivations

00:23:37 that are involved.

00:23:38 Yeah, they are visible.

00:23:40 And we can even quantify possibly their emotional state

00:23:45 because they leave droppings behind.

00:23:48 And there are chemicals there that can be associated

00:23:50 with neurotransmitters.

00:23:52 And we can separate what emotions they might have

00:23:55 experienced in the last 24 hours.

00:23:58 Yeah.

00:23:59 What to you is the most beautiful, speaking of hyenas,

00:24:04 what to you is the most beautiful nature-inspired algorithm

00:24:08 in your work that you’ve come across?

00:24:09 Something maybe early on in your work or maybe today?

00:24:14 I think evolutionary computation is the most amazing method.

00:24:19 What fascinates me most about computers

00:24:23 is that you can get more out than you put in.

00:24:26 I mean, you can write a piece of code

00:24:29 and your machine does what you told it.

00:24:31 I mean, this happened to me in my freshman year.

00:24:34 It did something very simple and I was just amazed.

00:24:37 I was blown away that it would get the number

00:24:39 and it would compute the result.

00:24:41 And I didn’t have to do it myself.

00:24:43 Very simple.

00:24:44 But if you push that a little further,

00:24:46 you can have machines that learn and they might learn patterns.

00:24:50 And already say deep learning neural networks,

00:24:53 they can learn to recognize objects, sounds,

00:24:58 patterns that humans have trouble with.

00:25:00 And sometimes they do it better than humans.

00:25:02 And that’s so fascinating.

00:25:04 And now if you take that one more step,

00:25:06 you get something like evolutionary algorithms

00:25:08 that discover things, they create things,

00:25:10 they come up with solutions that you did not think of.

00:25:13 And that just blows me away.

00:25:15 It’s so great that we can build systems, algorithms

00:25:18 that can be in some sense smarter than we are,

00:25:21 that they can discover solutions that we might miss.

00:25:24 A lot of times it is because we have as humans,

00:25:26 we have certain biases,

00:25:27 we expect the solutions to be a certain way

00:25:30 and you don’t put those biases into the algorithm

00:25:32 so they are more free to explore.

00:25:34 And evolution is just an absolutely fantastic explorer.

00:25:37 And that’s what really is fascinating.

00:25:40 Yeah, I think I get made fun of a bit

00:25:43 because I currently don’t have any kids,

00:25:45 but you mentioned programs.

00:25:47 I mean, do you have kids?

00:25:50 Yeah.

00:25:51 So maybe you could speak to this,

00:25:52 but there’s a magic to the creative process.

00:25:55 Like with Spot, the Boston Dynamics Spot,

00:25:59 but really any robot that I’ve ever worked on,

00:26:02 it just feels like the similar kind of joy

00:26:04 I imagine I would have as a father.

00:26:06 Not the same perhaps level,

00:26:08 but like the same kind of wonderment.

00:26:10 Like there’s exactly this,

00:26:11 which is like you know what you had to do initially

00:26:17 to get this thing going.

00:26:19 Let’s speak on the computer science side,

00:26:21 like what the program looks like,

00:26:23 but something about it doing more

00:26:27 than what the program was written on paper

00:26:30 is like that somehow connects to the magic

00:26:34 of this entire universe.

00:26:36 Like that’s like, I feel like I found God.

00:26:39 Every time I like, it’s like,

00:26:42 because you’ve really created something that’s living.

00:26:45 Yeah.

00:26:46 Even if it’s a simple program.

00:26:47 It has a life of its own, it has an intelligence of its own.

00:26:48 It’s beyond what you actually thought.

00:26:51 Yeah.

00:26:51 And that is, I think it’s exactly spot on.

00:26:53 That’s exactly what it’s about.

00:26:55 You created something and it has an ability

00:26:57 to live its life and do good things

00:27:00 and you just gave it a starting point.

00:27:03 So in that sense, I think it’s,

00:27:04 that may be part of the joy actually.

00:27:06 But you mentioned creativity in this context,

00:27:11 especially in the context of evolutionary computation.

00:27:14 So, we don’t often think of algorithms as creative.

00:27:18 So how do you think about creativity?

00:27:21 Yeah, algorithms absolutely can be creative.

00:27:24 They can come up with solutions that you don’t think about.

00:27:28 I mean, creativity can be defined.

00:27:29 A couple of requirements: it has to be new,

00:27:32 it has to be useful, and it has to be surprising.

00:27:35 And those certainly are true with, say,

00:27:38 evolutionary computation discovering solutions.

00:27:41 So maybe an example, for instance,

00:27:44 we did this collaboration with MIT Media Lab,

00:27:47 Caleb Harper’s lab, where they had

00:27:50 a hydroponic food computer, they called it,

00:27:54 an environment that was completely computer controlled,

00:27:56 nutrients, water, light, temperature,

00:27:59 everything is controlled.

00:28:00 Now, what do you do if you can control everything?

00:28:05 Farmers know a lot about how to make plants grow

00:28:08 in their own patch of land.

00:28:10 But if you can control everything, it’s too much.

00:28:13 And it turns out that we don’t actually

00:28:14 know very much about it.

00:28:16 So we built a system, evolutionary optimization system,

00:28:20 together with a surrogate model of how plants grow

00:28:23 and let this system explore recipes on its own.

00:28:28 And initially, we were focusing on light,

00:28:32 how strong, what wavelengths, how long the light was on.

00:28:36 And we put some boundaries which we thought were reasonable.

00:28:40 For instance, that there were at least six hours of darkness,

00:28:44 like night, because that’s what we have in the world.

00:28:47 And very quickly, the system, evolution,

00:28:51 pushed all the recipes to that limit.

00:28:54 We were trying to grow basil.

00:28:55 And we initially had some 200, 300 recipes,

00:29:00 exploration as well as known recipes.

00:29:02 But now we are going beyond that.

00:29:04 And everything was pushed to that limit.

00:29:06 So we look at it and say, well, we can easily just change it.

00:29:09 Let’s have it your way.

00:29:10 And it turns out the system discovered

00:29:13 that basil does not need to sleep.

00:29:16 24 hours, lights on, and it will thrive.

00:29:19 It will be bigger, it will be tastier.

00:29:21 And this was a big surprise, not just to us,

00:29:24 but also the biologists in the team

00:29:26 that anticipated that there are some constraints

00:29:30 that are in the world for a reason.

00:29:32 It turns out that evolution did not have the same bias.

00:29:36 And therefore, it discovered something that was creative.

00:29:38 It was surprising, it was useful, and it was new.
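
The loop just described can be sketched roughly as follows, with a toy fitness function standing in for their surrogate model of plant growth (all names, numbers, and the growth function itself are illustrative assumptions, not the actual system):

```python
import random

def surrogate_growth(recipe):
    """Toy stand-in for the learned surrogate of plant growth.
    It happens to reward longer light hours, echoing the basil result."""
    hours, intensity = recipe
    return 2.0 * hours + intensity - 0.5 * abs(intensity - 50)

def mutate(recipe, bounds):
    (lo_h, hi_h), (lo_i, hi_i) = bounds
    hours, intensity = recipe
    return (min(max(hours + random.gauss(0, 1.0), lo_h), hi_h),
            min(max(intensity + random.gauss(0, 5.0), lo_i), hi_i))

# The bounds encode the experimenters' prior: "at least six hours of
# darkness" caps light at 18 hours; raising the cap to 24 is what let
# evolution find the no-sleep recipe.
bounds = ((0, 18), (0, 100))
population = [(random.uniform(*bounds[0]), random.uniform(*bounds[1]))
              for _ in range(50)]

for generation in range(100):
    ranked = sorted(population, key=surrogate_growth, reverse=True)
    parents = ranked[:10]                          # truncation selection
    population = parents + [mutate(random.choice(parents), bounds)
                            for _ in range(40)]

best = max(population, key=surrogate_growth)
print(f"best recipe: {best[0]:.1f} h light, intensity {best[1]:.1f}")
# With the 18 h cap, light hours pile up at the boundary, the same
# signature the team saw before lifting the constraint.
```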

00:29:41 That’s fascinating to think about the things we think

00:29:44 that are fundamental to living systems on Earth today,

00:29:48 whether they’re actually fundamental

00:29:49 or they somehow fit the constraints of the system.

00:29:53 And all we have to do is just remove the constraints.

00:29:56 Do you ever think about,

00:29:59 I don’t know how much you know

00:30:00 about brain-computer interfaces, like Neuralink.

00:30:03 The idea there is our brains are very limited.

00:30:08 And if we just allow, we plug in,

00:30:11 we provide a mechanism for a computer

00:30:13 to speak with the brain.

00:30:15 So you’re thereby expanding

00:30:16 the computational power of the brain.

00:30:19 The possibilities there,

00:30:21 from a very high level philosophical perspective,

00:30:25 is limitless.

00:30:27 But I wonder how limitless it is.

00:30:30 Are the constraints we have features

00:30:33 that are fundamental to our intelligence?

00:30:36 Or is this just this weird constraint

00:30:38 in terms of our brain size and skull

00:30:40 and lifespan and senses?

00:30:44 It’s just the weird little quirk of evolution.

00:30:47 And if we just open that up,

00:30:49 like add much more senses,

00:30:51 add much more computational power,

00:30:53 the intelligence will expand exponentially.

00:30:57 Do you have a sense about constraints,

00:31:03 the relationship of evolution and computation

00:31:05 to the constraints of the environment?

00:31:09 Well, at first I’d like to comment on that,

00:31:12 like changing the inputs to human brain.

00:31:16 And flexibility of the brain.

00:31:18 I think there’s a lot of that.

00:31:20 There are experiments that are done in animals

00:31:22 like Mriganka Sur’s at MIT,

00:31:25 switching the auditory and visual information

00:31:29 so it goes to the wrong part of the cortex.

00:31:31 And the animal was still able to hear

00:31:34 and perceive the visual environment.

00:31:36 And there are kids that are born with severe disorders

00:31:41 and sometimes they have to remove half of the brain,

00:31:43 like one half, and they still grow up.

00:31:46 They have the functions migrate to the other parts.

00:31:48 There’s a lot of flexibility like that.

00:31:50 So I think it’s quite possible to hook up the brain

00:31:55 with different kinds of sensors, for instance,

00:31:57 and something that we don’t even quite understand

00:32:00 or have today on different kinds of wavelengths

00:32:02 or whatever they are.

00:32:04 And then the brain can learn to make sense of it.

00:32:07 And that I think is this good hope

00:32:09 that these prosthetic devices, for instance, work,

00:32:12 not because we make them so good and so easy to use,

00:32:15 but the brain adapts to them

00:32:17 and can learn to take advantage of them.

00:32:20 And so in that sense, if there’s a trouble, a problem,

00:32:23 I think the brain can be used to correct it.

00:32:26 Now going beyond what we have today, can you get smarter?

00:32:29 That’s really much harder to do.

00:32:31 Giving the brain more input might overwhelm it.

00:32:35 It would have to learn to filter it and focus

00:32:39 in order to use the information effectively,

00:32:43 and augmenting intelligence

00:32:46 with some kind of external devices like that

00:32:49 might be difficult, I think.

00:32:51 But replacing what’s lost, I think is quite possible.

00:32:55 Right, so our intuition allows us to sort of imagine

00:32:59 that we can replace what’s been lost,

00:33:01 but expansion beyond what we have,

00:33:03 I mean, we’re already one of the most,

00:33:05 if not the most intelligent things on this earth, right?

00:33:07 So it’s hard to imagine.

00:33:09 But if the brain can keep up with an order of magnitude

00:33:14 greater set of information thrown at it,

00:33:18 if it can do, if it can reason through that.

00:33:20 Part of me, this is the Russian thing, I think,

00:33:22 is I tend to think that the limitations

00:33:25 is where the superpower is,

00:33:27 that immortality and a huge increase in bandwidth

00:33:32 of information by connecting computers with the brain

00:33:37 is not going to produce greater intelligence.

00:33:39 It might produce lesser intelligence.

00:33:41 So I don’t know, there’s something about the scarcity

00:33:45 being essential to fitness or performance,

00:33:52 but that could be just because we’re so limited.

00:33:56 No, exactly, you make do with what you have,

00:33:57 but you don’t have to be a genius

00:34:00 and you don’t have to pipe it directly to the brain.

00:34:04 I mean, we already have devices like phones

00:34:07 where we can look up information at any point.

00:34:10 And that can make us more productive.

00:34:12 You don’t have to argue about, I don’t know,

00:34:14 what happened in that baseball game or whatever it is,

00:34:16 because you can look it up right away.

00:34:17 And I think in that sense, we can learn to utilize tools.

00:34:22 And that’s what we have been doing for a long, long time.

00:34:27 And we are already, the brain is already drinking

00:34:29 from the firehose, like vision.

00:34:32 There’s way more information in vision

00:34:34 than we actually process.

00:34:35 So the brain is already good at identifying what matters.

00:34:39 And we can switch that from vision

00:34:42 to some other wavelength or some other kind of modality.

00:34:44 But I think that the same processing principles

00:34:47 probably still apply.

00:34:49 But also indeed this ability to have information

00:34:53 more accessible and more relevant,

00:34:55 I think can enhance what we do.

00:34:57 I mean, kids today at school, they learn about DNA.

00:35:00 I mean, things that were discovered

00:35:02 just a couple of years ago.

00:35:04 And it’s already common knowledge

00:35:06 and we are building on it.

00:35:07 And we don’t see a problem where

00:35:12 there’s too much information for us to absorb and learn.

00:35:15 Maybe people become a little bit more narrow

00:35:17 in what they know, they are in one field.

00:35:20 But this information that we have accumulated,

00:35:23 it is passed on and people are picking up on it

00:35:26 and they are building on it.

00:35:27 So it’s not like we have reached the point of saturation.

00:35:30 We still have this process that allows us to be selective

00:35:34 and decide what’s interesting; I think it still works

00:35:37 even with the amount of information we have today.

00:35:40 Yeah, it’s fascinating to think about

00:35:43 like Wikipedia becoming a sensor.

00:35:45 Like, so the fire hose of information from Wikipedia.

00:35:49 So it’s like you integrated directly into the brain

00:35:51 to where you’re thinking, like you’re observing the world

00:35:54 with all of Wikipedia directly piping into your brain.

00:35:57 So like when I see a light,

00:35:59 I immediately have like the history of who invented

00:36:03 electricity, like, integrated very quickly into my thinking.

00:36:07 So just the way you think about the world

00:36:09 might be very interesting

00:36:11 if you can integrate that kind of information.

00:36:13 What are your thoughts, if I could ask on early steps

00:36:18 on the Neuralink side?

00:36:20 I don’t know if you got a chance to see,

00:36:21 but there was a monkey playing pong

00:36:25 through the brain-computer interface.

00:36:27 And the dream there is sort of,

00:36:30 you’re already replacing the thumbs essentially

00:36:33 that you would use to play video game.

00:36:35 The dream is to be able to increase further

00:36:40 the interface by which you interact with the computer.

00:36:43 Are you impressed by this?

00:36:44 Are you worried about this?

00:36:46 What are your thoughts as a human?

00:36:47 I think it’s wonderful.

00:36:48 I think it’s great that we could do something

00:36:51 like that.

00:36:52 I mean, there are devices that read your EEG for instance,

00:36:56 and humans can learn to control things

00:37:00 using just their thoughts in that sense.

00:37:02 And I don’t think it’s that different.

00:37:04 I mean, those signals would go to limbs,

00:37:06 they would go to thumbs.

00:37:08 Now the same signals go through a sensor

00:37:11 to some computing system.

00:37:13 It still probably has to be built on human terms,

00:37:17 not to overwhelm them, but utilize what’s there

00:37:20 and sense the right kind of patterns

00:37:23 that are easy to generate.

00:37:24 But, oh, that I think is really quite possible

00:37:27 and wonderful and could be very much more efficient.

00:37:32 Is there, so you mentioned surprising

00:37:34 being a characteristic of creativity.

00:37:37 Is there something, you already mentioned a few examples,

00:37:39 but is there something that jumps out at you

00:37:41 as particularly surprising

00:37:44 from the various evolutionary computation systems

00:37:48 you’ve worked on, the solutions that

00:37:52 came up along the way?

00:37:53 Not necessarily the final solutions,

00:37:55 but maybe things that were even discarded.

00:37:58 Is there something that just jumps to mind?

00:38:00 It happens all the time.

00:38:02 I mean, evolution is so creative,

00:38:05 so good at discovering solutions you don’t anticipate.

00:38:09 A lot of times they are taking advantage of something

00:38:12 that you didn’t think was there,

00:38:13 like a bug in the software, for instance.

00:38:15 A lot of, there’s a great paper

00:38:17 the community put together

00:38:19 about surprising anecdotes in evolutionary computation.

00:38:22 A lot of them are indeed, in some software environment,

00:38:25 there was a loophole or a bug

00:38:28 and the system utilizes that.

00:38:30 By the way, for people who want to read it,

00:38:31 it’s kind of fun to read.

00:38:33 It’s called The Surprising Creativity of Digital Evolution,

00:38:36 a collection of anecdotes from the evolutionary computation

00:38:39 and artificial life research communities.

00:38:41 And there’s just a bunch of stories

00:38:43 from all the seminal figures in this community.

00:38:45 You have a story in there that’s related to you,

00:38:48 at least, on the Tic Tac Toe memory bomb.

00:38:51 So can you, I guess, describe that situation,

00:38:54 if you think that’s still…

00:38:55 Yeah, that’s quite a bit smaller scale

00:38:59 than our “basil doesn’t need to sleep” surprise,

00:39:03 but it was actually done by students in my class,

00:39:06 in a neural nets and evolutionary computation class.

00:39:09 There was an assignment.

00:39:11 It was perhaps a final project

00:39:13 where people built game-playing AIs; it was an AI class.

00:39:19 And this one, and it was for Tic Tac Toe

00:39:21 or five in a row on a large board.

00:39:24 And this one team evolved a neural network

00:39:28 to make these moves.

00:39:29 And they set it up, the evolution.

00:39:32 They didn’t really know what would come out,

00:39:35 but it turned out that they did really well.

00:39:37 Evolution actually won the tournament.

00:39:38 And most of the time when it won,

00:39:40 it won because the other teams crashed.

00:39:43 And then when we look at it, like what was going on

00:39:45 was that evolution discovered that if it makes a move

00:39:48 that’s really, really far away,

00:39:49 like millions of squares away,

00:39:53 the other teams’ programs expanded memory

00:39:57 in order to take that into account

00:39:59 until they ran out of memory and crashed.

00:40:01 And then you win a tournament

00:40:03 by crashing all your opponents.

00:40:05 I think that’s quite a profound example,

00:40:08 which probably applies to most games,

00:40:14 from even a game theoretic perspective,

00:40:16 that sometimes to win, you don’t have to be better

00:40:20 within the rules of the game.

00:40:22 You have to come up with ways to break your opponent’s brain,

00:40:28 if it’s a human, like not through violence,

00:40:31 but through some hack where the brain just is not,

00:40:34 you’re basically, how would you put it?

00:40:39 You’re going outside the constraints

00:40:43 of where the brain is able to function.

00:40:45 Expectations of your opponent.

00:40:46 I mean, even Kasparov pointed that out,

00:40:49 that when Deep Blue was playing against Kasparov,

00:40:51 that it was not playing the same way as Kasparov expected.

00:40:55 And this has to do with not having the same biases.

00:40:59 And that’s really one of the strengths of the AI approach.

00:41:06 Can you at a high level say,

00:41:08 what are the basic mechanisms

00:41:10 of evolutionary computation algorithms

00:41:12 that use something that could be called

00:41:15 an evolutionary approach?

00:41:17 Like how does it work?

00:41:19 What are the connections,

00:41:21 what are the echoes of the connection to its biological counterpart?

00:41:24 A lot of these algorithms really do take motivation

00:41:27 from biology, but they are caricatures.

00:41:29 You try to essentialize it

00:41:31 and take the elements that you believe matter.

00:41:33 So in evolutionary computation,

00:41:35 it is the creation of variation

00:41:38 and then the selection upon that.

00:41:40 So the creation of variation,

00:41:41 you have to have some mechanism

00:41:43 that allow you to create new individuals

00:41:44 that are very different from what you already have.

00:41:47 That’s the creativity part.

00:41:48 And then you have to have some way of measuring

00:41:50 how well they are doing and using that measure to select

00:41:55 who goes to the next generation and you continue.

00:41:58 So first you also, you have to have

00:42:00 some kind of digital representation of an individual

00:42:03 that can be then modified.

00:42:04 So I guess humans in biological systems

00:42:07 have DNA and all those kinds of things.

00:42:09 And so you have to have similar kind of encodings

00:42:12 in a computer program.

00:42:13 Yes, and that is a big question.

00:42:15 How do you encode these individuals?

00:42:16 So there’s a genotype, which is that encoding

00:42:19 and then a decoding mechanism gives you the phenotype,

00:42:23 which is the actual individual that then performs the task

00:42:26 and in an environment can be evaluated how good it is.
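
A toy version of that genotype-to-phenotype mapping (a sketch under simple assumptions, not any particular system): the genotype is a flat string of numbers, decoding reshapes it into the weights of a tiny neural network, and that network is the phenotype evaluated in its environment, here a trivial XOR task.

```python
import numpy as np

GENOTYPE_LEN = 2 * 4 + 4 * 1   # weights of a 2-4-1 network, biases omitted

def decode(genotype):
    """Decoding: reshape the flat genotype into network weight matrices."""
    w1 = genotype[:8].reshape(2, 4)
    w2 = genotype[8:].reshape(4, 1)
    return lambda x: np.tanh(np.tanh(x @ w1) @ w2)   # the phenotype

def fitness(genotype):
    """Evaluate the phenotype in its environment: approximate XOR."""
    net = decode(genotype)
    xs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    ys = np.array([[0], [1], [1], [0]], dtype=float)
    return -float(np.mean((net(xs) - ys) ** 2))      # higher is better

genotype = np.random.randn(GENOTYPE_LEN)             # a random individual
print("fitness:", fitness(genotype))
```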

00:42:31 So even that mapping is a big question

00:42:33 and how do you do it?

00:42:34 But typically the representations are,

00:42:37 either they are strings of numbers

00:42:38 or they are some kind of trees.

00:42:39 Those are something that we know very well

00:42:41 in computer science and we try to do that.

00:42:43 And DNA in some sense is also a sequence,

00:42:48 a string.

00:42:50 So it’s not that far from it,

00:42:52 but DNA also has many other aspects

00:42:54 that we don’t take into account necessarily

00:42:56 like there’s folding and interactions

00:43:00 that are other than just the sequence itself.

00:43:03 And lots of that is not yet captured

00:43:06 and we don’t know whether they are really crucial.

00:43:10 Evolution, biological evolution has produced

00:43:12 wonderful things, but if you look at them,

00:43:16 it’s not necessarily the case that every piece

00:43:18 is irreplaceable and essential.

00:43:20 There’s a lot of baggage because you have to construct it

00:43:23 and it has to go through various stages

00:43:25 and we still have appendix and we have tail bones

00:43:29 and things like that that are not really that useful.

00:43:31 If you try to explain them now,

00:43:33 it would make no sense; it would be very hard.

00:43:35 But if you think of us as products of evolution,

00:43:38 you can see where they came from.

00:43:39 They were useful at one point perhaps

00:43:41 and no longer are, but they’re still there.

00:43:43 So that process is complex

00:43:47 and your representation should support it.

00:43:50 And that is quite difficult if we are limited

00:43:56 to strings or trees,

00:43:59 and then we are pretty much limited

00:44:01 in what can be constructed.

00:44:03 And one thing that we are still missing

00:44:05 in evolutionary computation in particular

00:44:07 is what we saw in biology, major transitions.

00:44:11 So that you go from, for instance,

00:44:13 single cell to multi cell organisms

00:44:16 and eventually societies.

00:44:17 There are transitions of level of selection

00:44:19 and level of what a unit is.

00:44:22 And that’s something we haven’t captured

00:44:24 in evolutionary computation yet.

00:44:26 Does that require a dramatic expansion

00:44:28 of the representation?

00:44:30 Is that what that is?

00:44:31 Most likely it does, but it’s quite,

00:44:34 we don’t even understand it in biology very well

00:44:36 where it’s coming from.

00:44:37 So it would be really good to look at major transitions

00:44:40 in biology, try to characterize them

00:44:42 a little bit more in detail, what the processes are.

00:44:45 How does a, so like a unit, a cell is no longer

00:44:49 evaluated alone.

00:44:50 It’s evaluated as part of a community,

00:44:52 a multi cell organism.

00:44:54 Even though it could reproduce, now it can’t alone.

00:44:57 It has to have that environment.

00:44:59 So there’s a push to another level, at least of selection.

00:45:03 And how do you make that jump to the next level?

00:45:04 Yes, how do you make the jump?

00:45:06 As part of the algorithm.

00:45:07 Yeah, yeah.

00:45:08 So we haven’t really seen that in computation yet.

00:45:12 And there are certainly attempts to have open ended evolution.

00:45:15 Things that could add more complexity

00:45:18 and start selecting at a higher level.

00:45:20 But it is still not quite the same

00:45:24 as going from single to multi to society,

00:45:27 for instance, in biology.

00:45:29 So there essentially would be,

00:45:31 as opposed to having one agent,

00:45:33 those agents all of a sudden spontaneously decide

00:45:36 to then be together.

00:45:38 And then your entire system would then be treating them

00:45:42 as one agent.

00:45:43 Something like that.

00:45:44 Some kind of weird merger building.

00:45:46 But also, so you mentioned,

00:45:47 I think you mentioned selection.

00:45:49 So basically there’s an agent and they don’t get to live on

00:45:53 if they don’t do well.

00:45:54 So there’s some kind of measure of what doing well is

00:45:56 and isn’t.

00:45:57 And does mutation come into play at all in the process

00:46:02 and what in the world does it serve?

00:46:04 Yeah, so, and again, back to what the computational

00:46:07 mechanisms of evolutionary computation are.

00:46:08 So the way to create variation,

00:46:12 you can take multiple individuals, two usually,

00:46:15 but you could do more.

00:46:17 And you exchange the parts of the representation.

00:46:20 You do some kind of recombination.

00:46:22 Could be crossover, for instance.

00:46:25 In biology, you do have DNA strings that are cut

00:46:30 and put together again.

00:46:32 We could do something like that.

00:46:34 And it seems to be that the crossover

00:46:37 is really the workhorse in biological evolution.

00:46:42 In computation, we tend to rely more on mutation.

00:46:47 And that is making random changes

00:46:50 into parts of the chromosome.

00:46:51 You can try to be intelligent and target certain areas

00:46:55 of it and make the mutations also follow some principle.

00:47:00 Like you collect statistics of performance and correlations

00:47:03 and try to make mutations you believe

00:47:05 are going to be helpful.

00:47:06 That’s where evolutionary computation has moved

00:47:09 in the last 20 years.

00:47:11 I mean, evolutionary computation has been around for 50 years,

00:47:12 but a lot of the recent…

00:47:15 Success comes from mutation.

00:47:16 Yes, comes from using statistics.

00:47:19 It’s like the rest of machine learning based on statistics.

00:47:22 We use similar tools to guide evolutionary computation.

00:47:25 And in that sense, it has diverged a bit

00:47:27 from biological evolution.
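
Schematically, the two variation operators just contrasted look something like this (an illustrative sketch, not any particular library’s API). Adapting the per-gene step sizes from performance statistics is the "intelligent mutation" idea, in the spirit of evolution strategies such as CMA-ES:

```python
import random

def one_point_crossover(parent_a, parent_b):
    """Cut both parent chromosomes at the same point and swap the tails,
    loosely analogous to DNA strands being cut and rejoined."""
    point = random.randrange(1, len(parent_a))
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def gaussian_mutation(genome, step_sizes):
    """Perturb each gene with noise scaled by a per-gene step size; in
    statistics-guided methods the step sizes themselves adapt over time."""
    return [g + random.gauss(0, s) for g, s in zip(genome, step_sizes)]

a, b = [0.1, 0.2, 0.3, 0.4], [1.1, 1.2, 1.3, 1.4]
child1, child2 = one_point_crossover(a, b)
print(child1, child2)
print(gaussian_mutation(child1, step_sizes=[0.05] * 4))
```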

00:47:30 And that’s one of the things I think we could look at again,

00:47:33 having a weaker selection, more crossover,

00:47:37 large populations, more time,

00:47:40 and maybe a different kind of creativity

00:47:42 would come out of it.

00:47:43 We are very impatient in evolutionary computation today.

00:47:46 We want answers right now, right, quickly.

00:47:48 And if somebody doesn’t perform, kill it.

00:47:51 And biological evolution doesn’t work quite that way.

00:47:55 And it’s more patient.

00:47:57 Yes, much more patient.

00:48:00 So I guess we need to add some kind of mating,

00:48:03 some kind of like dating mechanisms,

00:48:05 like marriage maybe in there.

00:48:07 So, into our algorithms, to improve the recombination,

00:48:13 as opposed to mutation doing all of the work.

00:48:15 Yeah, and many ways of being successful.

00:48:18 Usually in evolutionary computation, we have one goal,

00:48:21 play this game really well compared to others.

00:48:25 But in biology, there are many ways of being successful.

00:48:28 You can build niches.

00:48:29 You can be stronger, faster, larger, or smarter,

00:48:34 or eat this or eat that.

00:48:36 So there are many ways to solve the same problem of survival.

00:48:40 And that then breeds creativity.

00:48:43 And it allows more exploration.

00:48:46 And eventually you get solutions

00:48:48 that are perhaps more creative

00:48:51 rather than trying to go from initial population directly

00:48:54 or more or less directly to your maximum fitness,

00:48:57 which you measure as just one metric.

00:49:00 So in a broad sense, before we talk about neuroevolution,

00:49:07 do you see evolutionary computation

00:49:11 as more effective than deep learning in a certain context?

00:49:14 Machine learning, broadly speaking.

00:49:16 Maybe even supervised machine learning.

00:49:18 I don’t know if you want to draw any kind of lines

00:49:21 and distinctions and borders

00:49:23 where they rub up against each other kind of thing,

00:49:25 where one is more effective than the other

00:49:27 in the current state of things.

00:49:28 Yes, of course, they are very different

00:49:30 and they address different kinds of problems.

00:49:32 And the deep learning has been really successful

00:49:36 in domains where we have a lot of data.

00:49:39 And that means not just data about situations,

00:49:42 but also what the right answers were.

00:49:45 So labeled examples, or they might be predictions,

00:49:47 maybe weather prediction where the data itself becomes labels.

00:49:51 What happened, what the weather was today

00:49:53 and what it will be tomorrow.

00:49:57 So deep learning methods are very effective

00:49:59 on those kinds of tasks.

00:50:01 But there are other kinds of tasks

00:50:03 where we don’t really know what the right answer is.

00:50:06 Game playing, for instance,

00:50:07 but many robotics tasks and actions in the world,

00:50:12 decision making and actual practical applications,

00:50:17 like treatments and healthcare

00:50:19 or investment in stock market.

00:50:21 Many tasks are like that.

00:50:22 We don’t know and we’ll never know

00:50:24 what the optimal answers were.

00:50:26 And there you need different kinds of approach.

00:50:28 Reinforcement learning is one of those.

00:50:30 Reinforcement learning comes from biology as well.

00:50:33 Agents learn during their lifetime.

00:50:35 They eat berries and sometimes they get sick,

00:50:37 and sometimes they don’t and they get stronger.

00:50:40 And then that’s how you learn.

00:50:42 And evolution is also a mechanism like that

00:50:46 at a different timescale because you have a population,

00:50:48 not an individual during its lifetime,

00:50:50 but an entire population as a whole

00:50:52 can discover what works.

00:50:55 And there you can afford individuals that don’t work out.

00:50:58 They will, you know, everybody dies

00:51:00 and you have a next generation

00:51:02 and they will be better than the previous one.

00:51:04 So that’s the big difference between these methods.

00:51:07 They apply to different kinds of problems.

00:51:10 And in particular, there’s often a comparison

00:51:15 that’s kind of interesting and important

00:51:16 between reinforcement learning and evolutionary computation.

00:51:20 And initially, reinforcement learning

00:51:23 was about individual learning during their lifetime.

00:51:25 And evolution is more like engineering.

00:51:28 You don’t care about the lifetime.

00:51:29 You don’t care about all the individuals that are tested.

00:51:32 You only care about the final result.

00:51:34 The last one, the best candidate that evolution produced.

00:51:39 In that sense, they also apply to different kinds of problems.

00:51:42 And now that boundary is starting to blur a bit.

00:51:46 You can use evolution as an online method

00:51:48 and reinforcement learning to create engineering solutions,

00:51:51 but that’s still roughly the distinction.

00:51:55 And from the point of view of what algorithm you wanna use,

00:52:00 if you have something where there is a cost for every trial,

00:52:03 reinforcement learning might be your choice.

00:52:06 Now, if you have a domain

00:52:07 where you can use a surrogate perhaps,

00:52:10 so you don’t have much of a cost for trial,

00:52:13 and you want to have surprises,

00:52:16 you want to explore more broadly,

00:52:18 then this population based method is perhaps a better choice

00:52:23 because you can try things out that you wouldn’t afford

00:52:27 when you’re doing reinforcement learning.
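A rough sketch of how a surrogate can stand in for costly trials; the 1-nearest-neighbor predictor and the threshold rule here are illustrative choices, not a specific published method:

```python
# Cache the results of expensive real trials and let a cheap predictor
# stand in for fitness, so only promising candidates pay the full cost.

archive = []  # (genome, true_fitness) pairs from real evaluations

def distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def surrogate_fitness(genome):
    # Predict fitness as that of the most similar already-evaluated genome.
    _, fit = min(archive, key=lambda pair: distance(genome, pair[0]))
    return fit

def evaluate(genome, expensive_trial, threshold):
    if not archive or surrogate_fitness(genome) >= threshold:
        true_fit = expensive_trial(genome)  # the costly real trial
        archive.append((genome, true_fit))
        return true_fit
    return surrogate_fitness(genome)
```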

00:52:28 There’s very few things as entertaining

00:52:31 as watching either evolutionary computation

00:52:33 or reinforcement learning teaching a simulated robot to walk.

00:52:37 Maybe there’s a higher level question

00:52:42 that could be asked here,

00:52:43 but do you find this whole space of applications

00:52:47 in robotics interesting for evolutionary computation?

00:52:51 Yeah, yeah, very much.

00:52:53 And indeed, there are fascinating videos of that.

00:52:56 And that’s actually one of the examples

00:52:58 where you can contrast the difference.

00:53:00 Between reinforcement learning and evolution.

00:53:03 Yes, so if you have a reinforcement learning agent,

00:53:06 it tries to be conservative

00:53:07 because it wants to walk as long as possible and be stable.

00:53:11 But if you have evolutionary computation,

00:53:13 it can afford these agents that go haywire.

00:53:17 They fall flat on their face and they could take a step

00:53:20 and then they jump and then again fall flat.

00:53:23 And eventually what comes out of that

00:53:25 is something like controlled falling.

00:53:29 You take another step and another step

00:53:30 and you no longer fall.

00:53:32 Instead you run, you go fast.

00:53:34 So that’s a way of discovering something

00:53:36 that’s hard to discover step by step incrementally.

00:53:39 Because you can afford these evolutionary dead ends,

00:53:43 although they are not entirely dead ends

00:53:45 in the sense that they can serve as stepping stones.

00:53:47 When you take two of those, put them together,

00:53:49 you get something that works even better.

00:53:52 And that is a great example of this kind of discovery.

00:53:55 Yeah, learning to walk is fascinating.

00:53:58 I talked quite a bit to Russ Tedrake, who’s at MIT.

00:54:01 There’s a community of folks,

00:54:03 roboticists, who just love the elegance

00:54:06 and beauty of movement.

00:54:09 And walking bipedal robotics is beautiful,

00:54:17 but also exceptionally dangerous

00:54:19 in the sense that like you’re constantly falling essentially

00:54:22 if you want to do elegant movement.

00:54:25 And the discovery of that is,

00:54:28 I mean, it’s such a good example

00:54:33 of how the discovery of a good solution

00:54:37 sometimes requires a leap of faith and patience

00:54:39 and all those kinds of things.

00:54:41 I wonder what other spaces there are

00:54:43 where you have to discover those kinds of things.

00:54:46 Yeah, another interesting direction

00:54:48 is learning for virtual creatures, learning to walk.

00:54:53 We did a study in simulation, obviously,

00:54:57 that you create those creatures,

00:55:00 not just their controller, but also their body.

00:55:02 So you have cylinders, you have muscles,

00:55:05 you have joints and sensors,

00:55:08 and you’re creating creatures that look quite different.

00:55:11 Some of them have multiple legs.

00:55:13 Some of them have no legs at all.

00:55:15 And then the goal was to get them to move, to walk, to run.

00:55:19 And what was interesting is that

00:55:22 when you evolve the controller together with the body,

00:55:26 you get movements that look natural

00:55:28 because they’re optimized for that physical setup.

00:55:31 And these creatures, you start believing them

00:55:33 that they’re alive because they walk in a way

00:55:35 that you would expect somebody

00:55:37 with that kind of a setup to walk.
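A hypothetical genome for that kind of body-brain coevolution might look like the following; all fields and ranges are invented for illustration and are not from the study described:

```python
import random
from dataclasses import dataclass

# Body plan and controller weights sit in one genome and are evolved
# together, so the gait that emerges is matched to the morphology.

@dataclass
class CreatureGenome:
    limb_lengths: list        # one cylinder length per limb
    joint_limits: list        # maximum joint angle per limb
    controller_weights: list  # flat weight vector for the neural controller

def random_genome(max_limbs=6, weights_per_limb=8):
    n = random.randint(0, max_limbs)  # zero limbs is allowed: some creatures crawl
    return CreatureGenome(
        limb_lengths=[random.uniform(0.2, 1.0) for _ in range(n)],
        joint_limits=[random.uniform(0.1, 1.5) for _ in range(n)],
        controller_weights=[random.gauss(0, 1)
                            for _ in range(weights_per_limb * max(n, 1))],
    )
```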

00:55:39 Yeah, there’s something subjective also about that, right?

00:55:43 I’ve been thinking a lot about that,

00:55:45 especially in the human robot interaction context.

00:55:50 You know, I mentioned Spot, the Boston Dynamics robot.

00:55:55 There is something about human robot communication.

00:55:58 Let’s say, let’s put it in another context,

00:56:00 something about human and dog context,

00:56:05 like a living dog,

00:56:07 where there’s a dance of communication.

00:56:10 First of all, the eyes, you both look at the same thing

00:56:12 and the dogs communicate with their eyes as well.

00:56:15 Like if you’re a human,

00:56:18 if you and a dog want to deal with a particular object,

00:56:24 you will look at the person,

00:56:26 the dog will look at you and then look at the object

00:56:28 and look back at you, all those kinds of things.

00:56:30 But there’s also just the elegance of movement.

00:56:33 I mean, there’s the, of course, the tail

00:56:35 and all those kinds of mechanisms of communication

00:56:38 and it all seems natural and often joyful.

00:56:41 And for robots to communicate that,

00:56:45 it’s really difficult to figure out

00:56:47 because it almost seems impossible to hard code in.

00:56:50 You can hard code it for demo purpose or something like that,

00:56:54 but it’s essentially choreographed.

00:56:58 Like if you watch some of the Boston Dynamics videos

00:57:00 where they’re dancing,

00:57:01 all of that is choreographed by human beings.

00:57:05 But to learn how to, with your movement,

00:57:09 demonstrate a naturalness and elegance, that’s fascinating.

00:57:14 Of course, in the physical space,

00:57:15 that’s very difficult to do, to learn at the kind of scale

00:57:18 that you’re referring to,

00:57:20 but the hope is that you could do that in simulation

00:57:23 and then transfer it into the physical space

00:57:25 if you’re able to model the robot sufficiently naturally.

00:57:28 Yeah, and sometimes I think that that requires

00:57:31 a theory of mind on the side of the robot

00:57:35 that they understand what you’re doing

00:57:38 because they themselves are doing something similar.

00:57:41 And that’s a big question too.

00:57:44 We talked about intelligence in general

00:57:47 and the social aspect of intelligence.

00:57:50 And I think that’s what is required

00:57:52 that we humans understand other humans

00:57:53 because we assume that they are similar to us.

00:57:57 We have one simulation we did a while ago.

00:57:59 Ken Stanley did that.

00:58:01 Two robots that were competing in simulation, like I said,

00:58:06 they were foraging for food to gain energy.

00:58:09 And then when they were really strong,

00:58:10 they would bounce into the other robot

00:58:12 and win if they were stronger.

00:58:14 And we watched evolution discover

00:58:17 more and more complex behaviors.

00:58:18 They first went to the nearest food

00:58:21 and then they started to plot a trajectory

00:58:24 so they get more, but then they started to pay attention

00:58:28 what the other robot was doing.

00:58:30 And in the end, there was a behavior

00:58:32 where one of the robots, the most sophisticated one,

00:58:37 sensed where the food pieces were

00:58:40 and identified that the other robot

00:58:42 was close to two pieces that were a very far distance away,

00:58:46 and there was one more piece of food nearby.

00:58:48 So it faked, now I’m using anthropomorphizing terms,

00:58:53 but it made a move towards those other pieces

00:58:55 in order for the other robot to actually go and get them

00:58:59 because it knew that the last remaining piece of food

00:59:02 was close and the other robot would have to travel

00:59:04 a long way, lose its energy

00:59:06 and then lose the whole competition.

00:59:10 So there was like emergence of something

00:59:12 like a theory of mind,

00:59:13 knowing what the other robot would do,

00:59:16 to guide it towards bad behavior in order to win.

00:59:19 So we can get things like that happen in simulation as well.

00:59:22 But that’s a complete natural emergence

00:59:25 of a theory of mind.

00:59:26 But I feel like if you add a little bit of a place

00:59:30 for a theory of mind to emerge more easily,

00:59:34 then you can go really far.

00:59:37 I mean, some of these things with evolution, you know,

00:59:41 you add a little bit of design in there, it’ll really help.

00:59:45 And I tend to think that a very simple theory of mind

00:59:50 will go a really long way for cooperation between agents

00:59:54 and certainly for human robot interaction.

00:59:57 Like it doesn’t have to be super complicated.

01:00:01 I’ve gotten a chance in the autonomous vehicle space

01:00:03 to watch vehicles interact with pedestrians

01:00:07 or pedestrians interacting with vehicles in general.

01:00:09 I mean, you would think that there’s a very complicated

01:00:13 theory of mind thing going on, but I have a sense,

01:00:15 it’s not well understood yet,

01:00:17 but I have a sense it’s pretty dumb.

01:00:19 Like it’s pretty simple.

01:00:22 There’s a social contract there between humans,

01:00:25 a human driver and a human crossing the road

01:00:28 where the human crossing the road trusts

01:00:32 that the human in the car is not going to murder them.

01:00:34 And there’s something about, again,

01:00:36 back to that mortality thing.

01:00:38 There’s some dance of ethics and morality that’s built in,

01:00:45 that you’re mapping your own morality

01:00:47 onto the person in the car.

01:00:50 And even if they’re driving at a speed where you think

01:00:54 if they don’t stop, they’re going to kill you,

01:00:56 you trust that if you step in front of them,

01:00:58 they’re going to hit the brakes.

01:00:59 And there’s that weird dance that we do

01:01:02 that I think is a pretty simple model,

01:01:04 but of course it’s very difficult to introspect what it is.

01:01:08 And autonomous robots in the human robot interaction

01:01:11 context have to build that.

01:01:13 Current robots are much less than what you’re describing.

01:01:17 They’re currently just afraid of everything.

01:01:19 They’re more, they’re not the kind that fall

01:01:22 and discover how to run.

01:01:24 They’re more like, please don’t touch anything.

01:01:26 Don’t hurt anything.

01:01:28 Stay as far away from humans as possible.

01:01:30 Treat humans as ballistic objects

01:01:34 that you surround with a large spatial envelope

01:01:38 and make sure you do not collide with.

01:01:40 That’s how, like you mentioned,

01:01:42 Elon Musk thinks about autonomous vehicles.

01:01:45 I tend to think autonomous vehicles need to have

01:01:48 a beautiful dance between human and machine,

01:01:50 where it’s not just the collision avoidance problem,

01:01:53 but a weird dance.

01:01:55 Yeah, I think these systems need to be able to predict

01:02:00 what will happen, what the other agent is going to do,

01:02:02 and then have a structure of what the goals are

01:02:06 and whether those predictions actually meet the goals.

01:02:08 And you can go probably pretty far

01:02:10 with that relatively simple setup already,

01:02:13 but to call it a theory of mind, I don’t think you need to.

01:02:16 I mean, it doesn’t matter whether the pedestrian

01:02:18 has a mind, it’s an object,

01:02:20 and we can predict what it will do.

01:02:21 And then we can predict what the states will be

01:02:23 in the future and whether they are desirable states.

01:02:26 Stay away from those that are undesirable

01:02:27 and go towards those that are desirable.

01:02:29 So it’s a relatively simple functional approach to that.

01:02:34 Where do we really need the theory of mind?

01:02:37 Maybe when you start interacting

01:02:40 and you’re trying to get the other agent to do something

01:02:44 and jointly, so that you can jointly,

01:02:46 collaboratively achieve something,

01:02:48 then it becomes more complex.

01:02:50 Well, I mean, even with the pedestrians,

01:02:51 you have to have a sense of where their attention,

01:02:54 actual attention in terms of their gaze is,

01:02:57 but also there’s this vision science,

01:03:00 people talk about this all the time.

01:03:01 Just because I’m looking at it

01:03:02 doesn’t mean I’m paying attention to it.

01:03:04 So figuring out what is the person looking at?

01:03:07 What is the sensory information they’ve taken in?

01:03:09 And the theory of mind piece comes in is

01:03:12 what are they actually attending to cognitively?

01:03:16 And also what are they thinking about?

01:03:19 Like what is the computation they’re performing?

01:03:21 And you have probably maybe a few options

01:03:24 for the pedestrian crossing.

01:03:28 It doesn’t have to be,

01:03:29 it’s like a variable with a few discrete states,

01:03:31 but you have to have a good estimation

01:03:33 which of the states that brain is in

01:03:35 for the pedestrian case.

01:03:36 And the same is for attending with a robot.

01:03:39 If you’re collaborating to pick up an object,

01:03:42 you have to figure out is the human,

01:03:44 like there’s a few discrete states

01:03:47 that the human could be in.

01:03:48 You have to predict that by observing the human.

01:03:52 And that seems like a machine learning problem

01:03:54 to figure out what’s the human up to.

01:03:59 It’s not as simple as sort of planning

01:04:02 just because they move their arm

01:04:03 means the arm will continue moving in this direction.

01:04:06 You have to really have a model

01:04:08 of what they’re thinking about

01:04:09 and what’s the motivation behind the movement of the arm.

01:04:12 Here we are talking about relatively simple physical actions,

01:04:16 but you can take that to higher levels also,

01:04:19 like to predict what the people are going to do,

01:04:21 you need to know what their goals are.

01:04:26 What are they trying to do? Are they exercising?

01:04:27 Are they just trying to get somewhere?

01:04:29 But even higher level, I mean,

01:04:30 you are predicting what people will do in their career,

01:04:33 what their life themes are.

01:04:35 Do they want to be famous, rich, or do good?

01:04:37 And that takes a lot more information,

01:04:40 but it allows you to then predict their actions,

01:04:43 what choices they might make.

01:04:45 So how does evolutionary computation apply

01:04:49 to the world of neural networks?

01:04:50 I’ve seen quite a bit of work from you and others

01:04:53 in the world of neuroevolution.

01:04:55 So maybe first, can you say, what is this field?

01:04:58 Yeah, neuroevolution is a combination of neural networks

01:05:02 and evolutionary computation in many different forms,

01:05:05 but the early versions were simply using evolution

01:05:11 as a way to construct a neural network

01:05:13 instead of say, stochastic gradient descent

01:05:17 or backpropagation.

01:05:18 Because evolution can evolve these parameters,

01:05:21 weight values in a neural network,

01:05:22 just like any other string of numbers, you can do that.

01:05:26 And that’s useful because in some cases you don’t have

01:05:29 those targets that you need to backpropagate from.

01:05:33 And it might be an agent that’s running a maze

01:05:35 or a robot playing a game or something.

01:05:38 You don’t, again, you don’t know what the right answers are,

01:05:41 you don’t have backprop,

01:05:42 but this way you can still evolve a neural net.

01:05:44 And neural networks are really good at these tasks,

01:05:47 because they recognize patterns

01:05:49 and they generalize, interpolate between known situations.

01:05:53 So you want to have a neural network in such a task,

01:05:56 even if you don’t have supervised targets.

01:05:59 So that’s a reason and that’s a solution.
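The core idea can be sketched in a few lines: treat the weight vector as the genome and evolve it against a task-level reward, with no gradients and no labels. This is only an illustrative sketch; the episode_reward function below is a stand-in for running the network in a maze, game, or robot and returning a score:

```python
import math
import random

N_IN, N_HIDDEN = 4, 8
GENOME_LEN = N_IN * N_HIDDEN + N_HIDDEN

def forward(weights, x):
    # The evolved network: one hidden layer with tanh, single linear output.
    w1, w2 = weights[:N_IN * N_HIDDEN], weights[N_IN * N_HIDDEN:]
    h = [math.tanh(sum(x[i] * w1[j * N_IN + i] for i in range(N_IN)))
         for j in range(N_HIDDEN)]
    return sum(h[j] * w2[j] for j in range(N_HIDDEN))

def evolve(episode_reward, pop_size=50, generations=100):
    population = [[random.gauss(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: the top fifth become parents.
        parents = sorted(population, key=episode_reward, reverse=True)[:pop_size // 5]
        population = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                      for _ in range(pop_size)]
    return max(population, key=episode_reward)
```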

01:06:01 And also more recently,

01:06:02 now when we have all this deep learning literature,

01:06:05 it turns out that we can use evolution

01:06:07 to optimize many aspects of those designs.

01:06:11 The deep learning architectures have become so complex

01:06:14 that there’s little hope for us little humans

01:06:17 to understand their complexity

01:06:18 and what actually makes a good design.

01:06:21 And now we can use evolution to do that design for you.

01:06:24 And it might mean optimizing hyperparameters,

01:06:28 like the depth of layers and so on,

01:06:30 or the topology of the network,

01:06:33 how many layers, how they’re connected,

01:06:35 but also other aspects like what activation functions

01:06:37 you use where in the network during the learning process,

01:06:40 or what loss function you use,

01:06:42 you could generalize that.

01:06:43 You could generate that, even data augmentation,

01:06:47 all the different aspects of the design

01:06:49 of deep learning experiments could be optimized that way.
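An illustrative encoding of such a design space as a genome; the particular knobs and values below are placeholders, and in a real system fitness would come from training the candidate and measuring validation accuracy:

```python
import random

SEARCH_SPACE = {
    "num_layers": [2, 4, 8, 16],
    "width": [64, 128, 256, 512],
    "activation": ["relu", "tanh", "elu"],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_design():
    # A genome is just one choice per design dimension.
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate_design(design):
    # Re-sample a single design dimension.
    key = random.choice(list(SEARCH_SPACE))
    return {**design, key: random.choice(SEARCH_SPACE[key])}
```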

01:06:53 So that’s an interaction between two mechanisms.

01:06:56 But there’s also, when we get more into cognitive science

01:07:00 and the topics that we’ve been talking about,

01:07:02 you could have learning mechanisms

01:07:04 at two different timescales.

01:07:06 So you do have an evolution

01:07:07 that gives you baby neural networks

01:07:10 that then learn during their lifetime.

01:07:12 And you have this interaction of two timescales.

01:07:15 And I think that can potentially be really powerful.

01:07:19 Now, in biology, we are not born with all our faculties.

01:07:23 We have to learn, we have a developmental period.

01:07:25 In humans, it’s really long and most animals have something.

01:07:29 And probably the reason is that the DNA

01:07:32 is not detailed enough or plentiful enough to describe them.

01:07:36 It can describe how to set the brain up,

01:07:38 but evolution can decide on a starting point

01:07:44 and then have a learning algorithm

01:07:46 that will construct the final product.

01:07:48 And this interaction of intelligent, well,

01:07:54 evolution that has produced a good starting point

01:07:56 for the specific purpose of learning from it

01:07:59 with the interaction with the environment,

01:08:02 that can be a really powerful mechanism

01:08:03 for constructing brains and constructing behaviors.

01:08:06 I like how you walk back from intelligence.

01:08:10 So optimize starting point, maybe.
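The two-timescale loop can be sketched as follows; the learn and evaluate functions are hypothetical placeholders for lifetime learning and task evaluation:

```python
# Evolution supplies the initial weights (the "baby network"), lifetime
# learning refines them, and the fitness evolution sees is measured only
# after learning has run.

def fitness_after_learning(initial_weights, learn, evaluate, lifetime_steps=100):
    weights = list(initial_weights)
    for _ in range(lifetime_steps):
        weights = learn(weights)   # e.g. Hebbian or gradient-based updates
    return evaluate(weights)       # evolution is scored on post-learning skill
```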

01:08:12 Yeah, okay, there’s a lot of fascinating things to ask here.

01:08:18 And this is basically this dance between neural networks

01:08:22 and evolutionary computation

01:08:23 could go into the category of automated machine learning

01:08:26 where you’re optimizing,

01:08:28 whether it’s hyperparameters or the topology,

01:08:31 or hyperparameters taken broadly.

01:08:34 But the topology thing is really interesting.

01:08:36 I mean, that’s not really done that effectively;

01:08:40 throughout the history of machine learning,

01:08:41 it has not been done much.

01:08:43 Usually there’s a fixed architecture.

01:08:45 Maybe there’s a few components you’re playing with,

01:08:47 but to grow a neural network, essentially,

01:08:50 the way you grow an organism is really fascinating space.

01:08:52 How hard is it, do you think, to grow a neural network?

01:08:58 And maybe what kind of neural networks

01:09:00 are more amenable to this kind of idea than others?

01:09:04 I’ve seen quite a bit of work on recurrent neural networks.

01:09:06 Is there some architectures that are friendlier than others?

01:09:10 And is this just a fun, small scale set of experiments

01:09:15 or do you have hope that we’ll be able to grow

01:09:18 powerful neural networks?

01:09:20 I think we can.

01:09:21 And most of the work up to now

01:09:24 is taking architectures that already exist

01:09:27 that humans have designed and try to optimize them further.

01:09:30 And you can totally do that.

01:09:32 A few years ago, we did an experiment.

01:09:34 We took a winner of the image captioning competition

01:09:39 and the architecture and just broke it into pieces

01:09:42 and took the pieces.

01:09:43 And that was our search space.

01:09:45 See if you can do better.

01:09:46 And we indeed could, 15% better performance

01:09:49 by just searching around the network design

01:09:52 that humans had come up with,

01:09:53 Oriol Vinyals and others.

01:09:56 So, but that’s starting from a point

01:09:59 that humans have produced,

01:10:00 but we could do something more general.

01:10:03 It doesn’t have to be that kind of network.

01:10:05 The hard part is, there are a couple of challenges.

01:10:08 One of them is to define the search space.

01:10:10 What are your elements and how you put them together.

01:10:14 And the space is just really, really big.

01:10:18 So you have to somehow constrain it

01:10:21 and have some hunch what will work

01:10:23 because otherwise everything is possible.

01:10:25 And another challenge is that in order to evaluate

01:10:28 how good your design is, you have to train it.

01:10:32 I mean, you have to actually try it out.

01:10:34 And that’s currently very expensive, right?

01:10:37 I mean, deep learning networks may take days to train

01:10:40 while imagine having a population of a hundred

01:10:42 and having to run it for a hundred generations.

01:10:44 It’s not yet quite feasible computationally.

01:10:48 It will be, but also there’s a large carbon footprint

01:10:51 and all that.

01:10:52 I mean, we are using a lot of computation for doing it.
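As a back-of-the-envelope estimate, using the population and generation counts just mentioned and assuming one GPU-day per training run (the per-run figure is an assumption for illustration):

```python
# Naive architecture evolution: every candidate in every generation
# must be fully trained to be evaluated.
population, generations, gpu_days_per_eval = 100, 100, 1
print(population * generations * gpu_days_per_eval)  # 10,000 GPU-days
```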

01:10:54 So we need intelligent methods,

01:10:57 I mean, we have to do some science

01:11:00 in order to figure out what the right representations

01:11:03 and the right operators are, and how we evaluate them

01:11:07 without having to fully train them.

01:11:09 And that is where the current research is

01:11:11 and we’re making progress on all those fronts.

01:11:14 So yes, there are certain architectures

01:11:17 that are more amenable to that approach,

01:11:20 but also I think we can create our own architectures

01:11:23 and representations that are even better at that.

01:11:26 And do you think it’s possible to do like a tiny baby network

01:11:30 that grows into something that can do state of the art

01:11:32 on like even the simple data set like MNIST,

01:11:35 and just like it just grows into a gigantic monster

01:11:39 that’s the world’s greatest handwriting recognition system?

01:11:42 Yeah, there are approaches like that.

01:11:44 Esteban Real and Quoc Le, for instance,

01:11:45 worked on evolving a smaller network

01:11:48 and then systematically expanding it to a larger one.

01:11:51 Your elements are already there and scaling it up

01:11:54 will just give you more power.

01:11:56 So again, evolution gives you that starting point

01:11:59 and then there’s a mechanism that gives you the final result

01:12:02 and a very powerful approach.

01:12:05 But you could also simulate the actual growth process.

01:12:12 And like I said before, evolving a starting point

01:12:15 and then evolving or training the network,

01:12:18 there’s not that much work that’s been done on that yet.

01:12:21 We need some kind of a simulation environment

01:12:24 so the interactions can happen at will;

01:12:27 the supervised environment doesn’t really work here,

01:12:29 it’s not as easily usable.

01:12:33 Sorry, the interaction between neural networks?

01:12:35 Yeah, the neural networks that you’re creating,

01:12:37 interacting with the world

01:12:39 and learning from these sequences of interactions,

01:12:43 perhaps communication with others.

01:12:46 That’s awesome.

01:12:47 We would like to get there,

01:12:48 but just the task of simulating something

01:12:51 at that level is very hard.

01:12:53 It’s very difficult.

01:12:54 I love the idea.

01:12:55 I mean, one of the powerful things about evolution

01:12:58 on Earth is the predators and prey emerged.

01:13:01 And like there’s just like,

01:13:03 there’s bigger fish and smaller fish

01:13:05 and it’s fascinating to think

01:13:07 that you could have neural networks competing

01:13:08 against each other, with one neural network

01:13:10 being able to destroy another one.

01:13:12 There’s like wars of neural networks competing

01:13:14 to solve the MNIST problem, I don’t know.

01:13:16 Yeah, yeah.

01:13:17 Oh, totally, yeah, yeah, yeah.

01:13:19 And we actually simulated that, predators and prey,

01:13:22 and it was interesting what happened there.

01:13:25 Padmini Rajagopalan did this

01:13:26 with Kay Holekamp, who is a zoologist.

01:13:29 So we had, again,

01:13:33 we had simulated hyenas, simulated zebras.

01:13:37 Nice.

01:13:38 And initially, the hyenas just tried to hunt them

01:13:42 and when they actually stumbled upon the zebra,

01:13:45 they ate it and were happy.

01:13:47 And then the zebras learned to escape

01:13:51 and the hyenas learned to team up.

01:13:54 And actually two of them approached

01:13:55 in different directions.

01:13:56 And now the zebras, their next step,

01:13:59 they generated a behavior where they split

01:14:02 in different directions,

01:14:03 just like actually gazelles do

01:14:07 when they are being hunted.

01:14:08 They confuse the predator

01:14:09 by going in different directions.

01:14:10 That emerged and then more hyenas joined

01:14:14 and kind of circled them.

01:14:16 And then when they circled them,

01:14:18 they could actually herd the zebras together

01:14:21 and eat multiple zebras.

01:14:23 So there was like an arms race of predators and prey.

01:14:28 And they gradually developed more complex behaviors,

01:14:31 some of which we actually do see in nature.

01:14:33 And this kind of coevolution,

01:14:36 that’s competitive coevolution,

01:14:38 it’s a fascinating topic

01:14:39 because there’s a promise or possibility

01:14:42 that you will discover something new

01:14:45 that you don’t already know.

01:14:46 You didn’t build it in.

01:14:48 It came from this arms race.

01:14:50 It’s hard to keep the arms race going.

01:14:52 It’s hard to have rich enough simulation

01:14:55 that supports all of these complex behaviors.

01:14:58 But at least for several steps,

01:15:00 we’ve already seen it in this predator prey scenario, yeah.
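A minimal sketch of that competitive-coevolution loop; play and mutate are placeholders for the actual predator-prey simulation and variation operators, and play is assumed to return the predator's success as a number in [0, 1]:

```python
import random

# Each side is scored against a sample of the current opposing
# population, so improvement on one side raises the bar for the other.

def coevolve(play, mutate, pred_pop, prey_pop, generations=100, sample=5):
    for _ in range(generations):
        pred_fit = [sum(play(p, q) for q in random.sample(prey_pop, sample))
                    for p in pred_pop]
        prey_fit = [sum(1 - play(p, q) for p in random.sample(pred_pop, sample))
                    for q in prey_pop]
        pred_pop = reproduce(pred_pop, pred_fit, mutate)
        prey_pop = reproduce(prey_pop, prey_fit, mutate)
    return pred_pop, prey_pop

def reproduce(pop, fit, mutate):
    ranked = sorted(range(len(pop)), key=lambda i: fit[i], reverse=True)
    parents = [pop[i] for i in ranked[:max(1, len(pop) // 5)]]
    return [mutate(random.choice(parents)) for _ in range(len(pop))]
```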

01:15:03 First of all, it’s fascinating to think about this context

01:15:06 in terms of evolving architectures.

01:15:09 So I’ve studied Tesla autopilot for a long time.

01:15:12 It’s one particular implementation of an AI system

01:15:17 that’s operating in the real world.

01:15:18 I find it fascinating because of the scale

01:15:20 at which it’s used out in the real world.

01:15:23 And I’m not sure if you’re familiar with that system much,

01:15:26 but, you know, Andrej Karpathy leads that team

01:15:28 on the machine learning side.

01:15:30 And there’s a multitask network, multiheaded network,

01:15:34 where there’s a core, but it’s trained on particular tasks.

01:15:38 And there’s a bunch of different heads

01:15:40 that are trained on that.

01:15:41 Is there some lessons from evolutionary computation

01:15:46 or neuroevolution that could be applied

01:15:48 to this kind of multiheaded beast

01:15:50 that’s operating in the real world?

01:15:52 Yes, it’s a very good problem for neuroevolution.

01:15:56 And the reason is that when you have multiple tasks,

01:16:00 they support each other.

01:16:02 So let’s say you’re learning to classify X-ray images

01:16:08 into different pathologies.

01:16:09 So you have one task is to classify this disease

01:16:13 and another one, this disease, another one, this one.

01:16:15 And when you’re learning from one disease,

01:16:19 that forces certain kinds of internal representations

01:16:21 and embeddings, and they can serve

01:16:24 as a helpful starting point for the other tasks.

01:16:27 So you are combining the wisdom of multiple tasks

01:16:30 into these representations.

01:16:32 And it turns out that you can do better

01:16:34 in each of these tasks

01:16:35 when you are learning simultaneously other tasks

01:16:38 than you would by one task alone.

01:16:39 Which is a fascinating idea in itself, yeah.

01:16:41 Yes, and people do that all the time.

01:16:43 I mean, you use knowledge of domains that you know

01:16:46 in new domains, and certainly neural network can do that.

01:16:49 When neuroevolution comes in is that,

01:16:52 what’s the best way to combine these tasks?

01:16:55 Now there are architectural designs that allow you to decide

01:16:58 where and how the embeddings,

01:17:01 the internal representations are combined

01:17:03 and how much you combine them.

01:17:05 And there’s quite a bit of research on that.

01:17:08 And my team, Elliot Meyerson has worked on that

01:17:11 in particular, like what is a good internal representation

01:17:14 that supports multiple tasks?

01:17:17 And we’re getting to understand how that’s constructed

01:17:20 and what’s in it, so that it is in a space

01:17:24 that supports multiple different heads, like you said.
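A sketch of such a multi-headed network, here in PyTorch; the layer sizes and task count are arbitrary placeholders:

```python
import torch.nn as nn

# A shared trunk builds one common representation, and per-task heads
# branch off it, so learning any one task shapes embeddings that the
# other tasks can reuse.

class MultiHeadNet(nn.Module):
    def __init__(self, in_dim=256, hidden=128, task_out_dims=(10, 5, 2)):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in task_out_dims)

    def forward(self, x, task_id):
        shared = self.trunk(x)            # representation shared across tasks
        return self.heads[task_id](shared)
```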

01:17:28 And that I think is fundamentally

01:17:31 how biological intelligence works as well.

01:17:34 You don’t build a representation just for one task.

01:17:38 You try to build something that’s general,

01:17:40 not only so that you can do better in one task

01:17:42 or multiple tasks, but also future tasks

01:17:45 and future challenges.

01:17:46 So you learn the structure of the world

01:17:50 and that helps you in all kinds of future challenges.

01:17:54 And so you’re trying to design a representation

01:17:56 that will support an arbitrary set of tasks

01:17:58 in a particular sort of class of problem.

01:18:01 Yeah, and also it turns out,

01:18:03 and that’s again, a surprise that Elliot found

01:18:05 was that those tasks don’t have to be very related.

01:18:10 You know, you can learn to do better vision

01:18:12 by learning language or better language

01:18:15 by learning about DNA structure.

01:18:17 Somehow, the world…

01:18:20 Yeah, it rhymes.

01:18:23 The world rhymes, even if it’s very disparate fields.

01:18:29 I mean, on that small topic, let me ask you,

01:18:31 because on the computational neuroscience side

01:18:36 you’ve worked on both language and vision.

01:18:41 What’s the connection between the two?

01:18:44 What’s more, maybe there’s a bunch of ways to ask this,

01:18:46 but what’s more difficult to build

01:18:48 from an engineering perspective

01:18:50 and evolutionary perspective,

01:18:52 the human language system or the human vision system

01:18:56 or the equivalent of in the AI space language and vision,

01:19:00 or is it the best as the multitask idea

01:19:03 that you’re speaking to

01:19:04 that they need to be deeply integrated?

01:19:07 Yeah, absolutely the latter.

01:19:09 Learning both at the same time,

01:19:11 I think is a fascinating direction in the future.

01:19:15 So we have data sets where there’s visual component

01:19:17 as well as verbal descriptions, for instance,

01:19:20 and that way you can learn a deeper representation,

01:19:22 a more useful representation for both.

01:19:25 But it’s still an interesting question

01:19:26 of which one is easier.

01:19:29 I mean, recognizing objects

01:19:31 or even understanding sentences, that’s relatively possible,

01:19:35 but where it becomes, where the challenges are

01:19:37 is to understand the world.

01:19:39 Like the visual world, the 3D,

01:19:42 what are the objects doing

01:19:43 and predicting what will happen, the relationships.

01:19:46 That’s what makes vision difficult.

01:19:48 And language, obviously it’s what is being said,

01:19:51 what the meaning is.

01:19:52 And the meaning doesn’t stop at who did what to whom.

01:19:57 There are goals and plans and themes,

01:19:59 and you eventually have to understand

01:20:01 the entire human society and history

01:20:04 in order to understand a sentence fully.

01:20:07 There are plenty of examples of those kinds

01:20:09 of short sentences where you need to bring in

01:20:11 all the world knowledge to understand them.

01:20:14 And that’s the big challenge.

01:20:15 Now we are far from that,

01:20:17 but even just bringing in the visual world

01:20:20 together with the sentence will give you already

01:20:24 a lot deeper understanding of what’s happening.

01:20:26 And I think that that’s where we’re going very soon.

01:20:29 I mean, we’ve had ImageNet for a long time,

01:20:32 and now we have all these text collections,

01:20:36 but having both together and then learning

01:20:40 a semantic understanding of what is happening,

01:20:42 I think that that will be the next step

01:20:44 in the next few years.

01:20:45 Yeah, you’re starting to see that

01:20:46 with all the work with Transformers,

01:20:47 with the community, the AI community,

01:20:50 starting to dip their toe into this idea

01:20:53 of having language models that are now doing stuff

01:20:59 with images, with vision, and then connecting the two.

01:21:03 I mean, right now it’s like these little explorations

01:21:05 we’re literally dipping the toe in,

01:21:07 but maybe at some point we’ll just dive into the pool

01:21:11 and it’ll just be all seen as the same thing.

01:21:13 I do still wonder what’s more fundamental,

01:21:16 whether vision is, whether we don’t think

01:21:21 about vision correctly.

01:21:23 Maybe the fact, because we’re humans

01:21:24 and we see things as beautiful and so on,

01:21:28 and because we have cameras that are taking pixels

01:21:31 as a 2D image, that we don’t sufficiently think

01:21:35 about vision as language.

01:21:38 Maybe Chomsky is right all along,

01:21:41 that vision is fundamental to,

01:21:43 sorry, that language is fundamental to everything,

01:21:46 to even cognition, to even consciousness.

01:21:49 The base layer is all language,

01:21:51 not necessarily like English, but some weird

01:21:54 abstract representation, linguistic representation.

01:21:59 Yeah, well, earlier we talked about the social structures

01:22:02 and that may be what’s underlying the language,

01:22:05 and that’s the more fundamental part,

01:22:06 and then language has been added on top of that.

01:22:08 Language emerges from the social interaction.

01:22:11 Yeah, that’s a very good guess.

01:22:13 We are visual animals, though.

01:22:15 A lot of the brain is dedicated to vision,

01:22:17 and also, when we think about various abstract concepts,

01:22:22 we usually reduce that to vision and images,

01:22:27 and that’s, you know, we go to a whiteboard,

01:22:29 you draw pictures of very abstract concepts.

01:22:33 So we tend to resort to that quite a bit,

01:22:35 and that’s a fundamental representation.

01:22:37 It’s probably possible that it predated language even.

01:22:41 I mean, animals, a lot of, they don’t talk,

01:22:43 but they certainly do have vision,

01:22:45 and language is an interesting development

01:22:49 from mastication, from eating.

01:22:53 You develop an organ that can actually produce sounds

01:22:55 and manipulate them.

01:22:58 Maybe that was an accident.

01:22:59 Maybe that was something that was available

01:23:00 and then allowed us to do the communication,

01:23:05 or maybe it was gestures.

01:23:06 Sign language could have been the original proto language.

01:23:10 We don’t quite know, but the language is more fundamental

01:23:13 than the medium in which it’s communicated,

01:23:16 and I think that it comes from those representations.

01:23:20 Now, in the current world, they are so strongly integrated,

01:23:26 it’s really hard to say which one is fundamental.

01:23:28 You look at the brain structures and even visual cortex,

01:23:32 which is supposed to be very much just vision.

01:23:34 Well, if you are thinking of semantic concepts,

01:23:37 you’re thinking of language, visual cortex lights up.

01:23:40 It’s still useful, even for language computations.

01:23:44 So there are common structures underlying them.

01:23:47 So utilize what you need.

01:23:49 And when you are understanding a scene,

01:23:51 you’re understanding relationships.

01:23:53 Well, that’s not so far from understanding relationships

01:23:55 between words and concepts.

01:23:56 So I think that that’s how they are integrated.

01:23:59 Yeah, and there’s dreams, and once we close our eyes,

01:24:02 there’s still a world in there somehow operating

01:24:04 and somehow possibly the visual system somehow integrated

01:24:08 into all of it.

01:24:09 I tend to enjoy thinking about aliens

01:24:12 and thinking about the sad thing to me

01:24:17 about extraterrestrial intelligent life,

01:24:21 that if it visited us here on Earth,

01:24:24 or if we came on Mars or maybe another solar system,

01:24:29 another galaxy one day,

01:24:30 that us humans would not be able to detect it

01:24:34 or communicate with it or appreciate,

01:24:37 like it’d be right in front of our nose

01:24:38 and we were too self obsessed to see it.

01:24:43 Not self obsessed, but our tools,

01:24:48 our frameworks of thinking would not detect it.

01:24:52 There’s a good movie, Arrival, and so on,

01:24:55 where Stephen Wolfram and his son,

01:24:56 I think were part of developing this alien language

01:24:59 of how aliens would communicate with humans.

01:25:01 Do you ever think about that kind of stuff

01:25:02 where if humans and aliens would be able to communicate

01:25:07 with each other, like if we met each other at some,

01:25:11 okay, we could do SETI, which is communicating

01:25:13 from across a very big distance,

01:25:15 but also just us, if you did a podcast with an alien,

01:25:22 do you think we’d be able to find a common language

01:25:25 and a common methodology of communication?

01:25:28 I think from a computational perspective,

01:25:30 the way to ask that is you have very fundamentally

01:25:33 different creatures, agents that are created,

01:25:35 would they be able to find a common language?

01:25:38 Yes, I do think about that.

01:25:40 I mean, I think a lot of people who are in computing,

01:25:42 they, and AI in particular, they got into it

01:25:46 because they were fascinated with science fiction

01:25:48 and all of these options.

01:25:50 I mean, Star Trek generated all kinds of devices

01:25:54 that we have now, they envisioned it first

01:25:56 and it’s a great motivator to think about things like that.

01:26:00 And I, so one, and again, being a computational scientist

01:26:06 and trying to build intelligent agents,

01:26:10 what I would like to do is have a simulation

01:26:13 where the agents actually evolve communication,

01:26:17 not just communication, we’ve done that,

01:26:18 people have done that many times,

01:26:20 that they communicate, they signal and so on,

01:26:22 but actually develop a language.

01:26:24 And language means grammar, it means all these

01:26:26 social structures and on top of that,

01:26:28 grammatical structures.

01:26:30 And we do it under various conditions

01:26:35 and actually try to identify what conditions

01:26:36 are necessary for it to come out.

01:26:39 And then we can start asking that kind of questions.

01:26:43 Are those languages that emerge

01:26:45 in those different simulated environments,

01:26:47 are they understandable to us?

01:26:49 Can we somehow make a translation?

01:26:52 We can make it a concrete question.

01:26:55 So machine translation of evolved languages.

01:26:58 And so, the languages that evolution comes up with,

01:27:01 can we translate them? Like, could I have a Google Translate

01:27:04 for the evolved languages?

01:27:07 Yes, and if we do that enough,

01:27:09 we have perhaps an idea what an alien language

01:27:14 might be like, the space of where those languages can be.

01:27:17 Because we can set up their environment differently.

01:27:19 It doesn’t need to be gravity.

01:27:22 You can have all kinds of, societies can be different.

01:27:24 They may have no predators.

01:27:26 They may have all, everybody’s a predator.

01:27:28 All kinds of situations.

01:27:30 And then see what the space possibly is

01:27:32 where those languages are and what the difficulties are.

01:27:35 That’d be really good actually to do that

01:27:37 before the aliens come here.

01:27:39 Yes, it’s good practice.

01:27:41 On the similar connection,

01:27:45 you can think of AI systems as aliens.

01:27:48 Is there ways to evolve a communication scheme

01:27:51 for, there’s a field you can call it explainable AI,

01:27:55 for AI systems to be able to communicate.

01:27:58 So you evolve a bunch of agents,

01:28:01 but for some of them to be able to talk to you also.

01:28:05 So to evolve a way for agents to be able to communicate

01:28:08 about their world to us humans.

01:28:11 Do you think that there’s possible mechanisms

01:28:13 for doing that?

01:28:14 We can certainly try.

01:28:16 And if it’s an evolutionary computation system,

01:28:20 for instance, you reward those solutions

01:28:22 that are actually functional.

01:28:24 That communication makes sense.

01:28:25 It allows us to together again, achieve common goals.

01:28:29 I think that’s possible.

01:28:30 But even from that paper that you mentioned,

01:28:35 the anecdotes, it’s quite likely also

01:28:37 that the agents learn to lie and fake

01:28:43 and do all kinds of things like that.

01:28:45 I mean, we see that in even very low level,

01:28:47 like bacterial evolution.

01:28:48 There are cheaters.

01:28:51 And who’s to say that what they say

01:28:53 is actually what they think.

01:28:56 But that’s what I’m saying,

01:28:57 that there would have to be some common goal

01:29:00 so that we can evaluate whether that communication

01:29:02 is at least useful.

01:29:05 They may be saying things just to make us feel good

01:29:08 or get us to do what we want,

01:29:10 or so that we would not turn them off or something.

01:29:12 But so we would have to understand

01:29:15 their internal representations much better

01:29:16 to really make sure that that translation is correct.

01:29:20 But it can be useful.

01:29:21 And I think it’s possible to do that.

01:29:23 There are examples where visualizations

01:29:27 are automatically created

01:29:29 so that we can look into the system

01:29:33 and that language is not that far from it.

01:29:35 I mean, it is a way of communicating and logging

01:29:38 what you’re doing in some interpretable way.

01:29:43 I think a fascinating topic, yeah, to do that.

01:29:45 Yeah, you’re making me realize

01:29:47 that it’s a good scientific question

01:29:51 whether lying is an effective mechanism

01:29:54 for integrating yourself and succeeding

01:29:56 in a social network, in a world that is social.

01:30:00 I tend to believe that honesty and love

01:30:04 are evolutionary advantages in an environment

01:30:09 where there’s a network of intelligent agents.

01:30:12 But it’s also very possible that dishonesty

01:30:14 and manipulation and even violence,

01:30:20 all those kinds of things might be more beneficial.

01:30:23 That’s the old open question about good versus evil.

01:30:25 But I tend to, I mean, I don’t know if it’s a hopeful,

01:30:29 maybe I’m delusional, but it feels like karma is a thing,

01:30:35 which is, like, long term, the agents

01:30:39 that are just kind to others, sometimes for no reason,

01:30:42 will do better.

01:30:43 In a society that’s not highly constrained on resources.

01:30:48 So like people start getting weird

01:30:49 and evil towards each other and bad

01:30:51 when the resources are very low relative

01:30:54 to the needs of the populace,

01:30:56 especially at the basic level, like survival, shelter,

01:31:01 food, all those kinds of things.

01:31:02 But I tend to believe that once you have

01:31:07 those things established, then, well, not to believe,

01:31:11 I guess I hope that AI systems will be honest.

01:31:14 But it’s scary to think about the Turing test,

01:31:19 AI systems that will eventually pass the Turing test

01:31:23 will be ones that are exceptionally good at lying.

01:31:26 That’s a terrifying concept.

01:31:29 I mean, I don’t know.

01:31:31 First of all, sort of from somebody who studied language

01:31:34 and obviously are not just a world expert in AI,

01:31:37 but somebody who dreams about the future of the field.

01:31:41 Do you hope, do you think there’ll be human level

01:31:45 or superhuman level intelligences in the future

01:31:48 that we eventually build?

01:31:52 Well, I definitely hope that we can get there.

01:31:56 One, I think important perspective

01:31:59 is that we are building AI to help us.

01:32:02 That it is a tool like cars or language

01:32:06 or communication, AI will help us be more productive.

01:32:13 And that is always a condition.

01:32:17 It’s not something that we build and let run

01:32:20 and it becomes an entity of its own

01:32:22 that doesn’t care about us.

01:32:25 Now, of course, really far in the future,

01:32:27 maybe that might be possible,

01:32:28 but not in the foreseeable future when we are building it.

01:32:32 And therefore we are always in a position of limiting

01:32:35 what it can or cannot do.

01:32:38 And your point about lying is very interesting.

01:32:45 Even in these hyena societies, for instance,

01:32:49 when a number of these hyenas band together

01:32:52 and they take a risk and steal the kill,

01:32:56 there are always hyenas that hang back

01:32:58 and don’t participate in that risky behavior,

01:33:02 but they walk in later and join the party

01:33:05 after the kill.

01:33:06 And there are even some that may be ineffective

01:33:10 and cause harm to others.

01:33:12 So, and like I said, even bacteria cheat.

01:33:15 And we see it in biology,

01:33:17 there’s always some element of opportunism.

01:33:20 I think that is just because,

01:33:22 if you have a society,

01:33:24 in order for society to be effective,

01:33:26 you have to have this cooperation

01:33:27 and you have to have trust.

01:33:29 And if you have enough of agents

01:33:32 who are able to trust each other,

01:33:33 you can achieve a lot more.

01:33:36 But if you have trust,

01:33:37 you also have opportunity for cheaters and liars.

01:33:40 And I don’t think that’s ever gonna go away.

01:33:43 There will be hopefully a minority

01:33:45 so that they don’t get in the way.

01:33:46 And we studied in these hyena simulations,

01:33:48 like what the proportion needs to be

01:33:50 before it is no longer functional.

01:33:52 And it turns out that you can tolerate

01:33:55 a few cheaters and a few liars

01:33:57 and the society can still function.

01:33:59 And that’s probably going to happen

01:34:02 when we build these systems that autonomously learn.

01:34:07 The really successful ones are honest

01:34:09 because that’s the best way of getting things done.

01:34:13 But there probably are also intelligent agents

01:34:15 that find that they can achieve their goals

01:34:17 by bending the rules or cheating.
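A toy sketch of that proportion question (not the actual hyena study): cooperators create shared value at a personal cost, cheaters only consume, and sweeping the cheater fraction shows roughly where the average payoff collapses. The payoff numbers are invented for illustration:

```python
def average_payoff(n_agents=1000, cheater_fraction=0.1, benefit=3.0, cost=1.0):
    cooperators = n_agents - int(n_agents * cheater_fraction)
    produced = cooperators * benefit   # value created by cooperators
    spent = cooperators * cost         # cost borne only by cooperators
    return (produced - spent) / n_agents

for fraction in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(fraction, average_payoff(cheater_fraction=fraction))
```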

01:34:20 So that could be a huge benefit

01:34:23 as opposed to having fixed AI systems.

01:34:25 Say we build an AGI system and deploy millions of them

01:34:29 that are exactly the same.

01:34:33 There might be a huge benefit to introducing

01:34:37 sort of, from an evolutionary computation perspective,

01:34:39 a lot of variation.

01:34:41 Sort of like diversity in all its forms is beneficial

01:34:46 even if some people are assholes

01:34:48 or some robots are assholes.

01:34:49 So like it’s beneficial to have that

01:34:51 because you can’t always a priori know

01:34:56 what’s good, what’s bad.

01:34:58 But that’s fascinating.

01:35:01 Absolutely.

01:35:02 Diversity is the bread and butter.

01:35:04 I mean, if you’re running an evolution,

01:35:05 you see diversity is the one fundamental thing

01:35:08 you have to have.

01:35:09 And absolutely, also, it’s not always good diversity.

01:35:12 It may be something that can be destructive.

01:35:14 In these hyena simulations,

01:35:16 we have hyenas that are just suicidal.

01:35:19 They just run and get killed.

01:35:20 But they form the basis of those

01:35:22 who actually are really fast,

01:35:24 but stop before they get killed

01:35:26 and eventually turn into this mob.

01:35:28 So there might be something useful there

01:35:30 if it’s recombined with something else.

01:35:32 So I think that as long as we can tolerate some of that,

01:35:34 it may turn into something better.

01:35:36 You may change the rules

01:35:38 because it’s so much more efficient to do something

01:35:40 that was actually against the rules before.

01:35:43 And we’ve seen society change over time

01:35:46 quite a bit along those lines.

01:35:47 That there were rules in society

01:35:49 that we don’t believe are fair anymore,

01:35:52 even though they were considered proper behavior before.

01:35:57 So things are changing.

01:35:58 And I think that in that sense,

01:35:59 I think it’s a good idea to be able to tolerate

01:36:03 some of that cheating

01:36:04 because eventually we might turn into something better.

01:36:07 So yeah, I think this is a message

01:36:08 to the trolls and the assholes of the internet

01:36:11 that you too have a beautiful purpose

01:36:13 in this human ecosystem.

01:36:15 So I appreciate you very much.

01:36:16 In moderate quantities, yeah.

01:36:18 In moderate quantities.

01:36:20 So there’s a whole field of artificial life.

01:36:22 I don’t know if you’re connected to this field,

01:36:24 if you pay attention.

01:36:26 Do you think about this kind of thing?

01:36:29 Is there impressive demonstration to you

01:36:32 of artificial life?

01:36:33 Do you think of the agents you work with,

01:36:35 from the evolutionary computation perspective, as life?

01:36:41 And where do you think this is headed?

01:36:43 Like, is there interesting systems

01:36:45 that we’ll be creating more and more

01:36:47 that make us redefine, maybe rethink

01:36:50 about the nature of life?

01:36:52 Different levels of definition and goals there.

01:36:55 I mean, at some level, artificial life

01:36:58 can be considered multiagent systems

01:37:01 that build a society that again, achieves a goal.

01:37:04 And it might be robots that go into a building

01:37:06 and clean it up or after an earthquake or something.

01:37:09 You can think of that as an artificial life problem

01:37:11 in some sense.

01:37:13 Or you can really think of it, artificial life,

01:37:15 as a simulation of life and a tool to understand

01:37:20 what life is and how life evolved on earth.

01:37:24 And like I said, in artificial life conference,

01:37:26 there are branches of that conference, sessions

01:37:29 of people who really worry about molecular designs

01:37:33 and the start of life, like I said,

01:37:36 primordial soup where eventually

01:37:37 you get something self replicating.

01:37:39 And they’re really trying to build that.

01:37:41 So it’s a whole range of topics.

01:37:46 And I think that artificial life is a great tool

01:37:50 to understand life.

01:37:53 And there are questions like sustainability,

01:37:56 species, we’re losing species.

01:37:59 How bad is it?

01:38:00 Is it natural?

01:38:02 Is there a tipping point?

01:38:05 And where are we going?

01:38:06 I mean, like the hyena evolution,

01:38:08 we may have understood that there’s a pivotal point

01:38:11 in their evolution.

01:38:12 They discovered cooperation and coordination.

01:38:16 Artificial life simulations can identify that

01:38:18 and maybe encourage things like that.

01:38:22 And also societies can be seen as a form of life itself.

01:38:28 I mean, we’re not talking about biological evolution,

01:38:30 evolution of societies.

01:38:31 Maybe some of the same phenomena emerge in that domain

01:38:36 and having artificial life simulations and understanding

01:38:40 could help us build better societies.

01:38:42 Yeah, and thinking from a meme perspective

01:38:45 of from Richard Dawkins,

01:38:50 that maybe the ideas are the organisms,

01:38:54 not the humans in these societies;

01:38:58 it’s almost like reframing what exactly is evolving.

01:39:01 Maybe the interesting,

01:39:02 the humans aren’t the interesting thing

01:39:04 as the contents of our minds is the interesting thing.

01:39:07 And that’s what’s multiplying.

01:39:09 And that’s actually multiplying and evolving

01:39:10 in a much faster timescale.

01:39:13 And that maybe has more power on the trajectory

01:39:16 of life on earth than does biological evolution

01:39:19 is the evolution of these ideas.

01:39:20 Yes, and it’s fascinating, like I said before,

01:39:23 that we can keep up somehow biologically.

01:39:27 We evolved to a point where we can keep up

01:39:30 with this meme evolution, literature, internet.

01:39:35 We understand DNA and we understand fundamental particles.

01:39:38 We didn’t start that way a thousand years ago.

01:39:41 And we haven’t evolved biologically very much,

01:39:43 but somehow our minds are able to extend.

01:39:46 And therefore AI can be seen also as one such step

01:39:51 that we created and it’s our tool.

01:39:53 And it’s part of that meme evolution that we created,

01:39:56 even if our biological evolution does not progress as fast.

01:39:59 And us humans might only be able to understand so much.

01:40:03 We’re keeping up so far,

01:40:05 or we think we’re keeping up so far,

01:40:07 but we might need AI systems to understand.

01:40:09 Maybe like the physics of the universe is operating,

01:40:13 look at string theory.

01:40:14 Maybe it’s operating in much higher dimensions.

01:40:17 Maybe, because of our cognitive limitations,

01:40:21 we’re totally unable to truly internalize the way this world works.

01:40:25 And so we’re running up against the limitation

01:40:28 of our own minds.

01:40:30 And we have to create these next level organisms

01:40:33 like AI systems that would be able to understand much deeper,

01:40:36 like really understand what it means to live

01:40:38 in a multi dimensional world

01:40:41 that’s outside of the four dimensions,

01:40:42 the three of space and one of time.

01:40:45 Right, and generally we can deal with the world

01:40:48 even if we don’t understand all the details.

01:40:49 We can use computers even though most of us

01:40:52 don’t know all the structure that’s underneath,

01:40:54 or drive a car.

01:40:55 I mean, there are many components,

01:40:57 especially in new cars, that you don’t quite fully know,

01:40:59 but you have the interface, you have an abstraction of it

01:41:02 that allows you to operate it and utilize it.

01:41:05 And I think that that’s perfectly adequate

01:41:08 and we can build on it.

01:41:09 And AI can play a similar role.

01:41:13 I have to ask about beautiful artificial life systems

01:41:18 or evolutionary computation systems.

01:41:20 Cellular automata, to me,

01:41:23 I remember they were a game changer for me early on in life

01:41:26 when I saw Conway’s Game of Life.

01:41:28 Conway himself recently passed away, unfortunately.

01:41:31 And it’s beautiful

01:41:36 how much complexity can emerge from such simple rules.

01:41:40 Somehow that simplicity

01:41:44 is such a powerful illustration,

01:41:47 and also humbling, because it feels like I personally,

01:41:50 from my perspective,

01:41:50 understand almost nothing about this world,

01:41:54 because my intuition fails completely

01:41:58 at how complexity can emerge from such simplicity.

01:42:01 My intuition failing, I think,

01:42:02 is the biggest problem I have.

01:42:05 Do you find systems like that beautiful?

01:42:08 Do you think about cellular automata?

01:42:11 Because cellular automata,

01:42:15 and many other artificial life systems,

01:42:17 don’t necessarily have an objective.

01:42:18 Maybe that’s the wrong way to say it.

01:42:21 It’s almost like it’s just evolving and creating.

01:42:28 And there’s not even a good definition

01:42:29 of what it means to create something complex

01:42:33 and interesting and surprising,

01:42:34 all those words that you said.

01:42:37 Are there some of those systems that you find beautiful?

01:42:41 Yeah, yeah.

01:42:41 And similarly, evolution does not have a goal.

01:42:45 It is responding to the current situation,

01:42:49 and survival then creates more complexity

01:42:52 and therefore we have something that we perceive as progress

01:42:56 but that’s not what evolution is inherently set to do.

01:43:00 And yeah, that’s really fascinating,

01:43:03 how from a simple set of rules or simple mappings

01:43:10 complexity can emerge.

01:43:14 So it’s a question of emergence and self-organization.

01:43:17 And the Game of Life is one of the simplest ones,

01:43:21 and very visual, and therefore it drives home the point

01:43:25 that nonlinear interactions

01:43:29 can give rise to this kind of complexity.

01:43:34 And biology and evolution is along the same lines.

01:43:37 We have simple representations.

01:43:40 DNA, if you really think of it, it’s not that complex.

01:43:44 It’s a long sequence, and there’s lots of them,

01:43:46 but it’s a very simple representation.

01:43:48 And similarly with evolutionary computation,

01:43:49 whatever string or tree representation we have

01:43:52 and the operations, the amount of code that’s required

01:43:57 to manipulate those is really, really little.
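
To make that concrete, here is a minimal sketch in Python of the two classic operators on a string genome. It is an illustrative toy with hypothetical names, not any particular system discussed in this conversation:

import random

def mutate(genome, rate=0.01, alphabet="01"):
    # Point mutation: each position independently flips
    # to a random symbol with a small probability.
    return "".join(random.choice(alphabet) if random.random() < rate else g
                   for g in genome)

def crossover(a, b):
    # One-point crossover: cut both parents at the same
    # random position and swap the tails.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

A full evolutionary loop adds only selection on top of these two operators; the manipulation machinery itself really is tiny.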

01:44:00 And of course, the Game of Life needs even less.

01:44:02 So how complexity emerges from such simple principles,

01:44:06 that’s absolutely fascinating.
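
As an illustration of how little code that takes, here is a minimal sketch of Conway’s Game of Life in Python. The rules are the standard ones (a live cell survives with two or three live neighbors; a dead cell is born with exactly three); the function and variable names are my own:

from collections import Counter

def step(live):
    # `live` is a set of (x, y) coordinates of live cells.
    # Count the live neighbors of every candidate cell.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth with exactly 3 live neighbors; survival with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells that keep traveling diagonally forever.
pattern = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):  # after 4 steps the glider has moved one cell diagonally
    pattern = step(pattern)

The entire update rule is one short function, yet patterns like the glider, and far more complex ones, emerge from it.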

01:44:09 The challenge is to be able to control it

01:44:11 and guide it and direct it so that it becomes useful.

01:44:15 And the Game of Life is fascinating to look at,

01:44:17 and in evolution, all the forms that come out are fascinating,

01:44:21 but can we actually make it useful for us?

01:44:24 And efficient because if you actually think about

01:44:26 each of the cells in the game of life as a living organism,

01:44:30 there’s a lot of death that has to happen

01:44:32 to create anything interesting.

01:44:34 And so I guess the question is, for us humans

01:44:36 who are mortal, whose lives end quickly,

01:44:38 we wanna kinda hurry up and make sure we take evolution

01:44:44 down a trajectory that is a little bit more efficient

01:44:47 than the alternatives.

01:44:49 And that touches upon something we talked about earlier

01:44:51 that evolutionary computation is very impatient.

01:44:54 We have a goal, we want it right away

01:44:57 whereas biology has a lot of time, deep time,

01:45:01 and weak pressure and large populations.

01:45:04 One great example of this is novelty search.

01:45:08 That’s evolutionary computation

01:45:11 where you don’t actually specify a fitness goal,

01:45:14 the actual thing that you want,

01:45:17 but you just reward solutions that are different

01:45:20 from what you’ve seen before, nothing else.

01:45:23 And you know what?

01:45:25 You actually discover things

01:45:26 that are interesting and useful that way.
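
Here is a minimal sketch of that idea in Python, under stated assumptions: each behavior is summarized as a single number (for a walking robot, say, the distance traveled), and novelty is the mean distance to the k nearest behaviors seen so far. The names and parameters are hypothetical; Lehman and Stanley’s actual implementation differs:

import random

def novelty(b, archive, k=15):
    # Novelty of behavior `b`: mean distance to the k nearest
    # behaviors recorded so far; high means unlike anything seen.
    if not archive:
        return float("inf")  # everything is novel at the start
    nearest = sorted(abs(b - other) for other in archive)[:k]
    return sum(nearest) / len(nearest)

def novelty_search(population, behavior, mutate, generations=100):
    # No fitness function at all: selection rewards only being
    # different from what has been seen before.
    archive = []
    for _ in range(generations):
        ranked = sorted(population,
                        key=lambda g: novelty(behavior(g), archive),
                        reverse=True)
        # Record the most novel behaviors so they stop counting as novel.
        archive.extend(behavior(g) for g in ranked[:3])
        # Breed the next generation from the more novel half.
        parents = ranked[:max(1, len(population) // 2)]
        population = [mutate(random.choice(parents)) for _ in population]
    return archive

The surprising result described next is that genuinely useful solutions, like an efficient walk, show up in the archive even though nothing ever asked for them.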

01:45:29 Ken Stanley and Joel Lehman did this one study

01:45:31 where they actually tried to evolve walking behavior

01:45:34 on robots.

01:45:35 That’s the case we talked about earlier,

01:45:36 where the robot failed in all kinds of ways

01:45:39 and eventually discovered something

01:45:40 that was a very efficient walk.

01:45:43 And it was because they rewarded things that were different

01:45:48 that you were able to discover something.

01:45:50 And I think that this is crucial

01:45:52 because in order to be really different

01:45:55 from what you already have,

01:45:56 you have to utilize what is there in a domain

01:45:59 to create something really different.

01:46:00 So you have encoded the fundamentals of your world,

01:46:05 and when you make changes to those fundamentals,

01:46:08 you get further away.

01:46:09 So that’s probably what’s happening

01:46:11 in these systems of emergence.

01:46:14 That the fundamentals are there.

01:46:17 And when you follow those fundamentals

01:46:18 you get to new points,

01:46:20 and some of those are actually interesting and useful.

01:46:22 Now, even in that robotic walker simulation

01:46:25 there was a large amount of garbage,

01:46:28 but among it, there were some of these gems.

01:46:31 And those are the ones that somehow

01:46:32 you have to recognize from the outside and make useful.

01:46:36 But with these kinds of productive systems,

01:46:38 if you code into them the right kind of principles,

01:46:41 ones that encode the structure of the domain,

01:46:45 then you will get to these solutions and discoveries.

01:46:49 It feels like that might also be a good way to live life.

01:46:52 So let me ask, do you have advice for young people today

01:46:58 about how to live life or how to succeed in their career

01:47:01 or forget career, just succeed in life

01:47:04 from an evolution and computation perspective?

01:47:08 Yes, yes, definitely.

01:47:11 Explore. Diversity and exploration, as individuals:

01:47:17 take classes in music, history, philosophy,

01:47:22 math, engineering, see connections between them,

01:47:27 travel, learn a language.

01:47:30 I mean, all this diversity is fascinating

01:47:32 and we have it at our fingertips today.

01:47:35 It’s possible, you have to make a bit of an effort

01:47:37 because it’s not easy, but the rewards are wonderful.

01:47:42 Yeah, there’s something interesting

01:47:43 about an objective function of new experiences.

01:47:47 So try to figure out, I mean,

01:47:51 what is the maximally new experience I could have today?

01:47:56 And that novelty, optimizing for novelty

01:47:59 for some period of time, might be a very interesting way

01:48:01 to sort of maximally expand the set of experiences you have,

01:48:06 and then, grounded in that perspective,

01:48:11 figure out what would be the most fulfilling trajectory

01:48:14 through life.

01:48:15 Of course, the flip side of that is, where I come from,

01:48:19 again, maybe it’s Russian, I don’t know,

01:48:20 too much choice has a detrimental effect, I think,

01:48:25 at least on my mind, whereas scarcity has an empowering effect.

01:48:31 So if I have very little of something,

01:48:37 and only one of that something, I will appreciate it deeply.

01:48:40 At least I did until I came to Texas recently

01:48:44 and started pigging out on delicious, incredible meat.

01:48:47 I’ve been fasting a lot, so I need to do that again.

01:48:49 But when you fast for a few days,

01:48:52 the first taste of food is incredible.

01:48:56 So the downside of exploration is that somehow,

01:49:05 maybe you can correct me,

01:49:06 but somehow you don’t get to experience deeply

01:49:11 any one of the particular moments,

01:49:13 but that could be a psychology thing.

01:49:15 That could just be a very peculiar

01:49:18 human flaw.

01:49:23 Yeah, I didn’t mean that you superficially explore.

01:49:26 I mean, you can.

01:49:27 Explore deeply.

01:49:28 Yeah, so you don’t have to explore 100 things,

01:49:31 but maybe a few topics

01:49:33 where you can take a deep enough dive

01:49:36 that you gain an understanding.

01:49:39 You yourself have to decide at some point

01:49:42 that this is deep enough,

01:49:44 that I’ve obtained what I can from this topic,

01:49:49 and now it’s time to move on.

01:49:51 And that might take years.

01:49:53 People sometimes switch careers

01:49:56 and they may stay on some career for a decade

01:49:59 and switch to another one.

01:50:00 You can do it.

01:50:01 You’re not predetermined to stay where you are.

01:50:04 But in order to achieve something,

01:50:09 they say

01:50:10 you need 10,000 hours to become an expert on something.

01:50:13 So you don’t have to become an expert,

01:50:15 but to even develop an understanding

01:50:17 and gain the experience that you can use later,

01:50:19 you probably have to spend, like I said, it’s not easy,

01:50:21 you’ve got to spend some effort on it.

01:50:24 Now, also at some point then,

01:50:26 when you have this diversity

01:50:28 and you have these experiences, this exploration,

01:50:32 you may find something that you can’t stay away from.

01:50:35 Like for us, it was computers, it was AI.

01:50:38 It was, you know, something I just had to do.

01:50:41 And then it will take decades maybe

01:50:45 and you are pursuing it

01:50:46 because you figured out that this is really exciting

01:50:49 and you can bring in your experiences.

01:50:51 And there’s nothing wrong with that either,

01:50:52 but you asked what’s the advice for young people.

01:50:55 That’s the exploration part.

01:50:57 And then beyond that, after that exploration,

01:51:00 you actually can focus and build a career.

01:51:03 And, you know, even there you can switch multiple times,

01:51:05 but I think that diversity and exploration are fundamental

01:51:09 to having a successful career, as are concentration

01:51:13 and spending effort where it matters.

01:51:15 But you are in a better position to make the choice

01:51:18 when you have done your homework.

01:51:20 Explored.

01:51:21 So exploration precedes commitment, but both are beautiful.

01:51:24 Yeah.

01:51:26 So again, from an evolutionary computation perspective,

01:51:29 we’ll look at all the agents that had to die

01:51:32 in order to come up with different solutions in simulation.

01:51:35 What do you think from that individual agent’s perspective

01:51:40 is the meaning of it all?

01:51:41 So for us humans, you’re just one agent

01:51:43 who’s going to be dead, unfortunately, one day too soon.

01:51:48 What do you think is the why

01:51:51 of why that agent came to be

01:51:55 and eventually will be no more?

01:51:58 Is there a meaning to it all?

01:52:00 Yeah.

01:52:00 In evolution, there is meaning.

01:52:02 Everything is a potential direction.

01:52:05 Everything is a potential stepping stone.

01:52:09 Not all of them are going to work out.

01:52:11 Some of them are foundations for further improvement.

01:52:16 And even those that are perhaps going to die out

01:52:21 were potential energies, potential solutions.

01:52:25 In biology, we see a lot of species die off naturally.

01:52:28 And you know, like the dinosaurs,

01:52:29 I mean, they were a really good solution for a while,

01:52:31 but then it turned out to be

01:52:33 not such a good solution in the long term.

01:52:37 When there’s an environmental change,

01:52:39 you have to have diversity.

01:52:40 Some other solutions become better.

01:52:42 That doesn’t mean it was a wasted attempt;

01:52:45 it just didn’t quite work out or last,

01:52:47 but there are still dinosaurs among us,

01:52:49 at least their relatives.

01:52:51 And they may one day again be useful, who knows?

01:52:55 So from an individual’s perspective,

01:52:57 you’ve got to think of the bigger picture,

01:52:59 that it is a huge engine of innovation.

01:53:04 And these elements are all part of it,

01:53:06 potential innovations on their own.

01:53:09 And also as raw material perhaps,

01:53:12 or stepping stones for other things that could come after.

01:53:16 But it still feels from an individual perspective

01:53:18 that I matter a lot.

01:53:21 But even if I’m just a little cog in a giant machine,

01:53:24 is that just a silly human notion

01:53:28 in an individualistic society, and should I let go of it?

01:53:32 Do you find beauty in being part of the giant machine?

01:53:36 Yeah, I think it’s meaningful.

01:53:38 I think it adds purpose to your life

01:53:41 that you are part of something bigger.

01:53:45 That said, do you ponder your individual agent’s mortality?

01:53:51 Do you think about death?

01:53:53 Do you fear death?

01:53:56 Well, certainly more now than when I was a youngster

01:54:00 and did skydiving and paragliding and all these things.

01:54:05 You’ve become wiser.

01:54:09 There is a reason for this life arc

01:54:13 that younger folks are more fearless in many ways.

01:54:17 That’s part of the exploration.

01:54:20 They are the individuals who think,

01:54:22 hmm, I wonder what’s over those mountains

01:54:24 or what if I go really far in that ocean?

01:54:27 What would I find?

01:54:27 I mean, older folks don’t necessarily think that way,

01:54:32 but younger ones do, and it’s kind of counterintuitive.

01:54:34 So yeah, but logically it’s like,

01:54:39 you have a limited amount of time,

01:54:40 what can you do with it that matters?

01:54:42 So, you have done your exploration,

01:54:45 you’ve committed to a certain direction,

01:54:48 and you’ve become an expert in it, perhaps.

01:54:50 What can I do that matters

01:54:52 with the limited resources that I have?

01:54:55 That’s how I think a lot of people, myself included,

01:54:59 start thinking later on in their career.

01:55:02 And like you said, leave a bit of a trace

01:55:05 and a bit of an impact even after the agent is gone.

01:55:08 Yeah, that’s the goal.

01:55:11 Well, this was a fascinating conversation.

01:55:13 I don’t think there’s a better way to end it.

01:55:15 Thank you so much.

01:55:16 So first of all, I’m very inspired

01:55:19 by how vibrant the community at UT Austin and in Austin is.

01:55:22 It’s really exciting for me to see it.

01:55:25 And this whole field seems profound philosophically,

01:55:29 but also like the path forward

01:55:31 for the artificial intelligence community.

01:55:33 So thank you so much for explaining

01:55:35 so many cool things to me today

01:55:36 and for wasting all of your valuable time with me.

01:55:39 Oh, it was a pleasure.

01:55:40 Thanks.

01:55:41 I appreciate it.

01:55:42 Thanks for listening to this conversation

01:55:44 with Risto Miikkulainen.

01:55:45 And thank you to the Jordan Harbinger Show,

01:55:48 Grammarly, Belcampo, and Indeed.

01:55:51 Check them out in the description to support this podcast.

01:55:55 And now let me leave you with some words from Carl Sagan.

01:55:59 Extinction is the rule.

01:56:01 Survival is the exception.

01:56:04 Thank you for listening.

01:56:05 I hope to see you next time.