Transcript
00:00:00 The following is a conversation with Joscha Bach,
00:00:02 his second time on the podcast.
00:00:04 Joscha is one of the most fascinating minds in the world,
00:00:08 exploring the nature of intelligence,
00:00:10 cognition, computation, and consciousness.
00:00:14 To support this podcast, please check out our sponsors,
00:00:17 Coinbase, Codecademy, Linode, NetSuite, and ExpressVPN.
00:00:23 Their links are in the description.
00:00:26 This is the Lex Fridman podcast,
00:00:28 and here is my conversation with Joscha Bach.
00:00:33 Thank you for once again coming on
00:00:35 to this particular Russian program
00:00:38 and sticking to the theme of a Russian program.
00:00:40 Let’s start with the darkest of topics.
00:00:43 Kriviyat.
00:00:45 So this is inspired by one of your tweets.
00:00:48 You wrote that, quote,
00:00:50 when life feels unbearable,
00:00:53 I remind myself that I’m not a person.
00:00:56 I am a piece of software running on the brain
00:00:58 of a random ape for a few decades.
00:01:01 It’s not the worst brain to run on.
00:01:04 Have you experienced low points in your life?
00:01:07 Have you experienced depression?
00:01:09 Of course, we all experience low points in our life,
00:01:12 and we get appalled by the things,
00:01:15 by the ugliness of stuff around us.
00:01:17 We might get desperate about our lack of self regulation,
00:01:21 and sometimes life is hard,
00:01:24 and I suspect you don’t get through your life,
00:01:27 nobody does, without low points
00:01:30 and without moments where they’re despairing.
00:01:33 And I thought that let’s capture this state
00:01:37 and how to deal with that state.
00:01:40 And I found that very often you realize
00:01:43 that when you stop taking things personally,
00:01:44 when you realize that this notion of a person is a fiction,
00:01:48 similar as it is in Westworld,
00:01:50 where the robots realize that their memories and desires
00:01:53 are the stuff that keeps them in the loop,
00:01:55 and they don’t have to act on those memories and desires,
00:01:59 that our memories and expectations are what make us unhappy,
00:02:02 and the present rarely does.
00:02:04 The day in which we are, for the most part, it’s okay, right?
00:02:08 When we are sitting here, right here, right now,
00:02:11 we can choose how we feel.
00:02:13 And the thing that affects us is the expectation
00:02:16 that something is going to be different
00:02:18 from what we want it to be,
00:02:19 or the memory that something was different
00:02:21 from what you wanted it to be.
00:02:24 And once we basically zoom out from all this,
00:02:27 what’s left is not a person.
00:02:28 What’s left is this state of being conscious,
00:02:32 which is a software state.
00:02:33 And software doesn’t have an identity.
00:02:35 It’s a physical law.
00:02:37 And it’s a law that acts in all of us,
00:02:39 and it’s embedded in a suitable substrate.
00:02:42 And we didn’t pick that substrate, right?
00:02:43 We are mostly randomly instantiated on it.
00:02:46 And they’re all these individuals,
00:02:48 and everybody has to be one of them.
00:02:51 And eventually you’re stuck on one of them,
00:02:54 and have to deal with that.
00:02:56 So you’re like a leaf floating down the river.
00:02:59 You just have to accept that there’s a river,
00:03:01 and you just float wherever it takes you.
00:03:03 You don’t have to do this.
00:03:04 The thing is that the illusion that you are an agent
00:03:08 is a construct.
00:03:09 What part of that is actually under your control?
00:03:13 And I think that our consciousness
00:03:15 is largely a control model for our own attention.
00:03:18 So we notice where we are looking,
00:03:21 and we can influence what we’re looking at,
00:03:22 how we are disambiguating things,
00:03:24 how we put things together in our mind.
00:03:26 And the whole system that runs us
00:03:28 is this big cybernetic motivational system.
00:03:30 So we’re basically like a little monkey
00:03:32 sitting on top of an elephant,
00:03:34 and we can put this elephant here and there
00:03:37 to go this way or that way.
00:03:39 And we might have the illusion that we are the elephant,
00:03:42 or that we are telling it what to do.
00:03:43 And sometimes we notice that it walks
00:03:45 into a completely different direction.
00:03:47 And we didn’t set this thing up.
00:03:49 It just is the situation that we find ourselves in.
00:03:52 How much prodding can we actually do of the elephant?
00:03:56 A lot.
00:03:57 But I think that our consciousness
00:04:00 cannot create the motive force.
00:04:03 Is the elephant consciousness in this metaphor?
00:04:05 No, the monkey is the consciousness.
00:04:07 The monkey is the attentional system
00:04:09 that is observing things.
00:04:10 There is a large perceptual system
00:04:12 combined with a motivational system
00:04:14 that is actually providing the interface to everything
00:04:17 and our own consciousness, I think,
00:04:18 is the tool that directs the attention
00:04:21 of that system, which means it singles out features
00:04:24 and performs conditional operations
00:04:26 for which it needs an index memory.
00:04:28 But this index memory is what we perceive
00:04:31 as our stream of consciousness.
00:04:32 But the consciousness is not in charge.
00:04:34 That’s an illusion.
00:04:35 So everything outside of that consciousness
00:04:40 is the elephant.
00:04:41 So it’s the physics of the universe,
00:04:43 but it’s also society that’s outside of your…
00:04:46 I would say the elephant is the agent.
00:04:48 So there is an environment to which the agent is stomping
00:04:51 and you are influencing a little part of that agent.
00:04:55 So is the agent a single human being?
00:04:58 Which object has agency?
00:05:02 That’s an interesting question.
00:05:03 I think a way to think about an agent
00:05:06 is that it’s a controller with a set point generator.
00:05:10 The notion of a controller comes from cybernetics
00:05:13 and control theory.
00:05:14 A control system consists of a system
00:05:17 that is regulating some value
00:05:20 and the deviation of that value from a set point.
00:05:23 And it has a sensor that measures the system’s deviation
00:05:27 from that set point and an effector
00:05:30 that can be parametrized by the controller.
00:05:32 So the controller tells the effector to do a certain thing.
00:05:35 And the goal is to reduce the distance
00:05:38 between the set point and the current value of the system.
00:05:40 And there’s an environment
00:05:41 which disturbs the regulated system,
00:05:43 which brings it away from that set point.
00:05:45 So simplest case is a thermostat.
00:05:47 The thermostat is really simple
00:05:49 because it doesn’t have a model.
00:05:50 The thermostat is only trying to minimize
00:05:52 the set point deviation in the next moment.
00:05:55 And if you want to minimize the set point deviation
00:05:58 over a longer time span, you need to integrate it.
00:06:00 You need to model what is going to happen.
00:06:03 So for instance, when you think about
00:06:05 that your set point is to be comfortable in life,
00:06:08 maybe you need to make yourself uncomfortable first, right?
00:06:11 So you need to make a model of what’s going to happen when.
00:06:14 And the task of the controller is to use its sensors
00:06:18 to measure the state of the environment
00:06:20 and the system that is being regulated
00:06:22 and figure out what to do.
00:06:24 And if the task is complex enough
00:06:27 and the set points are complicated enough,
00:06:30 And if the controller has enough capacity
00:06:32 and enough sensor feedback,
00:06:34 then the task of the controller is to make a model
00:06:37 of the entire universe that it’s in,
00:06:39 the conditions under which it exists and of itself.
00:06:42 And this is a very complex agent.
00:06:43 And we are in that category.
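(A minimal sketch, in Python, of the controller-with-set-point idea described above. The temperatures, gain, and disturbance are illustrative assumptions, not anything specified in the conversation; a thermostat-like controller only corrects the current deviation, which is what makes it the simplest case.)

```python
# Illustrative sketch of a set-point controller (thermostat-like case).
# It has no model of the future: it only reacts to the current deviation.

def thermostat_step(current_temp, set_point, gain=0.5):
    """Return an effector command proportional to the set-point deviation."""
    deviation = set_point - current_temp
    return gain * deviation

temp = 15.0                      # the regulated value
for _ in range(20):
    effort = thermostat_step(temp, set_point=21.0)
    disturbance = -0.3           # the environment keeps pulling the value away
    temp += effort + disturbance
print(round(temp, 1))            # settles near, but not exactly at, 21
```

A model-based controller, as described above, would instead predict future deviations and could accept short-term discomfort to reduce the integrated deviation over a longer time span.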
00:06:45 And an agent is not necessarily a thing in the universe.
00:06:49 It’s a class of models that we use
00:06:51 to interpret aspects of the universe.
00:06:54 And when we notice the environment around us,
00:06:57 a lot of things only make sense
00:06:59 at the level at which we are entangled with them
00:07:01 if we interpret them as control systems
00:07:03 that make models of the world
00:07:04 and try to minimize their own set points.
00:07:07 So the models are the agents.
00:07:10 The agent is a class of model.
00:07:12 And we notice that we are an agent ourselves.
00:07:14 We are the agent that is using our own control model
00:07:17 to perform actions.
00:07:18 We notice we produce a change in the model
00:07:22 and things in the world change.
00:07:23 And this is how we discover the idea that we have a body,
00:07:26 that we are situated in an environment,
00:07:28 and that we have a first person perspective.
00:07:31 Still don’t understand what’s the best way to think
00:07:34 of which object has agency with respect to human beings.
00:07:39 Is it the body?
00:07:41 Is it the brain?
00:07:43 Is it the contents of the brain as agency?
00:07:46 Like what’s the actuators that you’re referring to?
00:07:49 What is the controller and where does it reside?
00:07:52 Or is it these impossible things?
00:07:54 Because I keep trying to ground it to space time,
00:07:57 the three dimension of space and the one dimension of time.
00:08:01 What’s the agent in that for humans?
00:08:04 There is not just one.
00:08:06 It depends on the way in which you’re looking
00:08:08 at this thing in which you’re framing it.
00:08:10 Imagine that you are, say Angela Merkel,
00:08:13 and you are acting on behalf of Germany.
00:08:16 Then you could say that Germany is the agent.
00:08:19 And in the mind of Angela Merkel,
00:08:21 she is Germany to some extent,
00:08:23 because in the way in which she acts,
00:08:25 the destiny of Germany changes.
00:08:28 There are things that she can change
00:08:29 that basically affect the behavior of that nation state.
00:08:33 Okay, so it’s hierarchies of,
00:08:35 to go to another one of your tweets
00:08:37 where I think you were playfully mocking Jeff Hawkins
00:08:42 by saying it’s brains all the way down.
00:08:45 So it’s like, it’s agents all the way down.
00:08:49 It’s agents made up of agents, made up of agents.
00:08:51 Like if Angela Merkel is Germany
00:08:54 and Germany’s made up of a bunch of people
00:08:56 and the people are themselves agents
00:08:58 in some kind of context.
00:09:01 And then people are made up of cells, each individual.
00:09:04 So is it agents all the way down?
00:09:07 I suspect that has to be like this
00:09:08 in a world where things are self organizing.
00:09:12 Most of the complexity that we are looking at,
00:09:15 everything in life is about self organization.
00:09:18 So I think up from the level of life, you have agents.
00:09:24 And below life, you rarely have agents
00:09:27 because sometimes you have control systems
00:09:30 that emerge randomly in nature
00:09:31 and try to achieve a set point,
00:09:33 but they’re not that interesting agents that make models.
00:09:36 And because to make an interesting model of the world,
00:09:39 you typically need a system that is Turing complete.
00:09:42 Can I ask you a personal question?
00:09:46 What’s the line between life and non life?
00:09:48 It’s personal because you’re a life form.
00:09:52 So what do you think in this emerging complexity,
00:09:55 at which point do things start being living
00:09:57 and having agency?
00:10:00 Personally, I think that the simplest answer
00:10:01 is that life is cells, because…
00:10:04 Life is what?
00:10:05 Cells.
00:10:06 Biological cells.
00:10:07 Biological cells.
00:10:07 So it’s a particular kind of principle
00:10:09 that we have discovered to exist in nature.
00:10:11 It’s modular stuff that consists
00:10:14 out of basically this DNA tape
00:10:17 with a read write head on top of it,
00:10:20 that is able to perform arbitrary computations
00:10:23 and state transitions within the cell.
00:10:25 And it’s combined with a membrane
00:10:27 that insulates the cell from its environment.
00:10:30 And there are chemical reactions inside of the cell
00:10:34 that are in disequilibrium.
00:10:36 And the cell is running in such a way
00:10:38 that this disequilibrium doesn’t disappear.
00:10:41 And if the cell goes into an equilibrium state, it dies.
00:10:46 And it requires something like a negentropy extractor
00:10:50 to maintain this disequilibrium.
00:10:51 So it’s able to harvest negentropy from its environment
00:10:55 and keep itself running.
00:10:57 Yeah, so there’s information and there’s a wall
00:11:00 to maintain this disequilibrium.
00:11:04 But isn’t this very earth centric?
00:11:06 Like what you’re referring to as a…
00:11:08 I’m not making a normative notion.
00:11:10 You could say that there are probably other things
00:11:13 in the universe that are cell like and life like,
00:11:16 and you could also call them life,
00:11:17 but eventually it’s just a willingness
00:11:21 to find an agreement of how to use the terms.
00:11:23 I like cells because it’s completely coextensive
00:11:26 with the way that we use the word
00:11:28 even before we knew about cells.
00:11:30 So people were pointing at some stuff
00:11:32 and saying, this is somehow animate.
00:11:34 And this is very different from the non animate stuff.
00:11:36 And what’s the difference between the living
00:11:38 and the dead stuff.
00:11:40 And it’s mostly whether the cells are working or not.
00:11:42 And also this boundary of life,
00:11:45 where we say that for instance, the virus
00:11:46 is basically an information packet
00:11:48 that is subverting the cell and not life by itself.
00:11:52 That makes sense to me.
00:11:54 And it’s somewhat arbitrary.
00:11:55 You could of course say that systems
00:11:57 that permanently maintain a disequilibrium
00:12:00 and can self replicate are always life.
00:12:03 And maybe that’s a useful definition too,
00:12:06 but this is eventually just how you want to use the word.
00:12:10 Is it so useful for conversation,
00:12:12 but is it somehow fundamental to the universe?
00:12:17 Do you think there’s a actual line
00:12:19 to eventually be drawn between life and non life?
00:12:21 Or is it all a kind of continuum?
00:12:24 I don’t think it’s a continuum,
00:12:25 but there’s nothing magical that is happening.
00:12:28 Living systems are a certain type of machine.
00:12:31 What about non living systems?
00:12:32 Is it also a machine?
00:12:34 There are non living machines,
00:12:35 but the question is at which point is a system
00:12:38 able to perform arbitrary state transitions
00:12:43 to make representations.
00:12:44 And living things can do this.
00:12:46 And of course we can also build non living things
00:12:48 that can do this, but we don’t know anything in nature
00:12:52 that is not a cell and is not created by cellular life
00:12:56 that is able to do that.
00:12:58 Not only do we not know,
00:13:02 I don’t think we have the tools to see otherwise.
00:13:05 I always worry that we look at the world too narrowly.
00:13:11 Like there could be life of a very different kind
00:13:14 right under our noses that we’re just not seeing
00:13:18 because of either limitations
00:13:21 of our cognitive capacity,
00:13:23 or we’re just not open minded enough
00:13:26 either with the tools of science
00:13:28 or just the tools of our mind.
00:13:32 Yeah, that’s possible.
00:13:33 I find this thought very fascinating.
00:13:35 And I suspect that many of us ask ourselves since childhood,
00:13:39 what are the things that we are missing?
00:13:40 What kind of systems and interconnections exist
00:13:43 that are outside of our gaze?
00:13:47 But we are looking for it
00:13:51 and physics doesn’t have much room at the moment
00:13:55 for opening up something that would not violate
00:13:59 the conservation of information as we know it.
00:14:03 Yeah, but I wonder about time scale and scale,
00:14:06 spatial scale, whether we just need to open up our idea
00:14:11 of what, like how life presents itself.
00:14:15 It could be operating in a much slower time scale,
00:14:17 a much faster time scale.
00:14:20 And it’s almost sad to think that there’s all this life
00:14:23 around us that we’re not seeing
00:14:25 because we’re just not like thinking
00:14:29 in terms of the right scale, both time and space.
00:14:34 What is your definition of life?
00:14:36 What do you understand as life?
00:14:40 Entities of sufficiently high complexity
00:14:44 that are full of surprises.
00:14:46 I don’t know, I don’t have a free will.
00:14:53 So that just came out of my mouth.
00:14:55 I’m not sure that even makes sense.
00:14:57 There’s certain characteristics.
00:14:59 So complexity seems to be a necessary property of life.
00:15:04 And I almost want to say it has ability
00:15:09 to do something unexpected.
00:15:13 It seems to me that life is the main source
00:15:15 of complexity on earth.
00:15:18 Yes.
00:15:19 And complexity is basically a bridgehead
00:15:22 that order builds into chaos by modeling,
00:15:27 by processing information in such a way
00:15:29 that you can perform reactions
00:15:31 that would not be possible for dumb systems.
00:15:33 And this means that you can harvest negentropy
00:15:36 that dumb systems cannot harvest.
00:15:37 And this is what complexity is mostly about.
00:15:40 In some sense, the purpose of life is to create complexity.
00:15:45 Yeah.
00:15:46 Increasing.
00:15:46 I mean, there seems to be some kind of universal drive
00:15:52 towards increasing pockets of complexity.
00:15:56 I don’t know what that is.
00:15:57 That seems to be like a fundamental,
00:16:00 I don’t know if it’s a property of the universe
00:16:02 or it’s just a consequence of the way the universe works,
00:16:05 but there seems to be this small pockets
00:16:08 of emergent complexity that builds on top of each other
00:16:11 and starts having like greater and greater complexity
00:16:15 by having like a hierarchy of complexity.
00:16:17 Little organisms building up a little society
00:16:20 that then operates almost as an individual organism itself.
00:16:24 And all of a sudden you have Germany and Merkel.
00:16:27 Well, that’s not obvious to me.
00:16:28 Everything that goes up has to come down at some point.
00:16:32 So if you see this big exponential curve somewhere,
00:16:36 it’s usually the beginning of an S curve
00:16:39 where something eventually reaches saturation.
00:16:41 And the S curve is the beginning of some kind of bump
00:16:43 that goes down again.
00:16:45 And there is just this thing that when you are
00:16:49 inside of an evolution of life,
00:16:53 you are on top of a puddle of negentropy
00:16:55 that is being sucked dry by life.
00:16:58 And during that happening,
00:17:00 you see an increase in complexity
00:17:02 because life forms are competing with each other
00:17:04 to get into finer and finer corners
00:17:09 of that negentropy extraction.
00:17:11 I feel like that’s a gradual, beautiful process
00:17:13 that almost follows a process akin to evolution.
00:17:18 And the way it comes down is not the same way it came up.
00:17:22 The way it comes down is usually harshly and quickly.
00:17:27 So usually there’s some kind of catastrophic event.
00:17:30 The Roman Empire took a long time.
00:17:32 But would that be,
00:17:36 would you classify this as a decrease in complexity though?
00:17:39 Yes.
00:17:40 I think that the size of the cities that could be fed
00:17:42 has decreased dramatically.
00:17:44 And you could see that the quality of the art decreased
00:17:47 and it did so gradually.
00:17:49 And maybe future generations,
00:17:53 when they look at the history of the United States
00:17:55 in the 21st century,
00:17:57 will also talk about the gradual decline,
00:17:59 not something that suddenly happens.
00:18:05 Do you have a sense of where we are?
00:18:07 Are we on the exponential rise?
00:18:09 Are we at the peak?
00:18:11 Or are we at the downslope of the United States empire?
00:18:15 It’s very hard to say from a single human perspective,
00:18:18 but it seems to me that we are probably at the peak.
00:18:25 I think that’s probably the definition of like optimism
00:18:28 and cynicism.
00:18:29 So my nature of optimism is,
00:18:31 I think we’re on the rise.
00:18:36 I think this is just all a matter of perspective.
00:18:39 Nobody knows,
00:18:40 but I do think that erring on the side of optimism,
00:18:43 like you need a sufficient number,
00:18:45 you need a minimum number of optimists
00:18:47 in order to make that upward trajectory actually work.
00:18:50 And so I tend to be on the side of the optimists.
00:18:53 I think that we are basically a species of grasshoppers
00:18:56 that have turned into locusts.
00:18:58 And when you are in that locust mode,
00:19:00 you see an amazing rise of population numbers
00:19:04 and of the complexity of the interactions
00:19:07 between the individuals.
00:19:08 But it’s ultimately the question is, is it sustainable?
00:19:12 See, I think we’re a bunch of lions and tigers
00:19:16 that have become domesticated cats,
00:19:20 to use a different metaphor.
00:19:21 So I’m not exactly sure we’re so destructive,
00:19:24 we’re just softer and nicer and lazier.
00:19:27 But I think we are monkeys and not cats.
00:19:29 And if you look at the monkeys, they are very busy.
00:19:33 The ones that have a lot of sex, those monkeys?
00:19:35 Not just the bonobos.
00:19:37 I think that all the monkeys are basically
00:19:38 a discontent species that always needs to meddle.
00:19:42 Well, the gorillas seem to have
00:19:44 a little bit more of a structure,
00:19:45 but it’s a different part of the tree.
00:19:50 Okay, you mentioned the elephant
00:19:52 and the monkey riding the elephant.
00:19:55 And consciousness is the monkey.
00:20:00 And there’s some prodding that the monkey gets to do.
00:20:03 And sometimes the elephant listens.
00:20:06 I heard you got into some contentious,
00:20:08 maybe you can correct me,
00:20:09 but I heard you got into some contentious
00:20:11 free will discussions.
00:20:13 Is this with Sam Harris or something like that?
00:20:16 Not that I know of.
00:20:18 Some people on Clubhouse told me
00:20:20 you made a bunch of big debate points about free will.
00:20:25 Well, let me just then ask you where,
00:20:28 in terms of the monkey and the elephant,
00:20:31 do you think we land in terms of the illusion of free will?
00:20:35 How much control does the monkey have?
00:20:38 We have to think about what the free will is
00:20:41 in the first place.
00:20:43 We are not the machine.
00:20:44 We are not the thing that is making the decisions.
00:20:46 We are a model of that decision making process.
00:20:49 And there is a difference between making your own decisions
00:20:54 and predicting your own decisions.
00:20:56 And that difference is the first person perspective.
00:20:59 And what basically makes decision making
00:21:04 under the conditions of free will distinct
00:21:06 from just automatically doing the best thing is
00:21:10 that we often don’t know what the best thing is.
00:21:13 We make decisions under uncertainty.
00:21:15 We make informed bets using a betting algorithm
00:21:17 that we don’t yet understand
00:21:19 because we haven’t reverse engineered
00:21:20 our own minds sufficiently.
00:21:22 We don’t know the expected rewards.
00:21:23 We don’t know the mechanism
00:21:24 by which we estimate the rewards and so on.
00:21:27 But there is an algorithm.
00:21:28 We observe ourselves performing it,
00:21:30 where we see that we weigh facts and factors
00:21:34 about the future, and then some kind of possibility,
00:21:39 some motive, gets raised to an intention.
00:21:41 And that’s an informed bet that the system is making.
00:21:44 And that making of the informed bet,
00:21:46 the representation of that is what we call free will.
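(One way to read the “informed bet” framing as a sketch: pick the option with the highest estimated expected reward, where the estimates are noisy because the system does not know the true rewards or its own estimation mechanism. The option names, rewards, and noise level below are made-up assumptions for illustration only.)

```python
import random

def informed_bet(options, estimate, samples=100):
    """Choose the option whose noisy reward estimates average out highest."""
    def estimated_value(option):
        return sum(estimate(option) for _ in range(samples)) / samples
    return max(options, key=estimated_value)

# A noisy estimator stands in for the not-yet-reverse-engineered internal model.
true_rewards = {"keep reading": 0.3, "go outside": 0.5}

def noisy_estimate(option):
    return true_rewards[option] + random.gauss(0, 0.2)

print(informed_bet(true_rewards, noisy_estimate))  # usually "go outside"
```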
00:21:49 And it seems to be paradoxical
00:21:51 because we think that the crucial thing about it
00:21:53 is that it’s somehow indeterministic.
00:21:56 And yet if it was indeterministic, it would be random.
00:22:00 And it cannot be random because if it was random,
00:22:03 if dice being thrown in the universe
00:22:05 randomly forced you to do things, it would be meaningless.
00:22:08 So the important part of the decisions
00:22:10 is always the deterministic stuff.
00:22:12 But it appears to be indeterministic to you
00:22:15 because it’s unpredictable.
00:22:16 Because if it was predictable,
00:22:18 you wouldn’t experience it as a free will decision.
00:22:21 You would experience it as just doing
00:22:23 the necessary right thing.
00:22:25 And you see this continuum between the free will
00:22:28 and the execution of automatic behavior
00:22:31 when you’re observing other people.
00:22:33 So for instance, when you are observing your own children,
00:22:36 if you don’t understand them,
00:22:37 you will use this agent model
00:22:40 where you have an agent with a set point generator.
00:22:43 And the agent is doing the best it can
00:22:45 to minimize the difference to the set point.
00:22:47 And it might be confused and sometimes impulsive or whatever,
00:22:51 but it’s acting on its own free will.
00:22:53 And when you understand what happens
00:22:55 in the mind of the child, you see that it’s automatic.
00:22:58 And you can outmodel the child,
00:23:00 you can build things around the child
00:23:02 that will lead the child to making exactly the decision
00:23:05 that you are predicting.
00:23:06 And under these circumstances,
00:23:08 like when you are a stage magician
00:23:10 or somebody who is dealing with people
00:23:13 that you sell a car to,
00:23:15 and you completely understand the psychology
00:23:17 and the impulses and the space of thoughts
00:23:19 that this individual can have at that moment.
00:23:21 Under these circumstances,
00:23:22 it makes no sense to attribute free will.
00:23:26 Because it’s no longer decision making under uncertainty.
00:23:28 You are already certain.
00:23:29 For them, there’s uncertainty,
00:23:30 but you already know what they’re doing.
00:23:33 But what about for you?
00:23:34 So is this akin to systems like cellular automata
00:23:40 where it’s deterministic,
00:23:43 but when you squint your eyes a little bit,
00:23:46 it starts to look like there’s agents making decisions
00:23:50 at the higher level, sort of, when you zoom out
00:23:53 and look at the entities
00:23:55 that are composed by the individual cells.
00:23:58 Even though there’s underlying simple rules
00:24:02 that make the system evolve in deterministic ways,
00:24:07 it looks like there’s organisms making decisions.
00:24:10 Is that where the illusion of free will emerges,
00:24:14 that jump in scale?
00:24:16 It’s a particular type of model,
00:24:18 but this jump in scale is crucial.
00:24:20 The jump in scale happens whenever
00:24:22 you have too many parts to count
00:24:23 and you cannot make a model at that level
00:24:25 and you try to find some higher level regularity.
00:24:28 And the higher level regularity is a pattern
00:24:30 that you project into the world to make sense of it.
00:24:34 And agency is one of these patterns, right?
00:24:36 You have all these cells that interact with each other
00:24:39 and the cells in our body are set up in such a way
00:24:42 that they benefit if their behavior is coherent,
00:24:45 which means that they act
00:24:46 as if they were serving a common goal.
00:24:49 And that means that they will evolve regulation mechanisms
00:24:52 that act as if they were serving a common goal.
00:24:55 And now you can make sense of all these cells
00:24:57 by projecting the common goal into them.
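(The cellular-automaton picture raised above can be made concrete with Conway’s Game of Life: the update rules are fixed, local, and deterministic, yet a “glider” pattern reads, at the zoomed-out level, like a little entity travelling across the grid. Standard rules below; the code is only an illustration, not something discussed in the conversation.)

```python
from collections import Counter

def step(live):
    """One Game of Life generation; live is a set of (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next step with exactly 3 neighbors, or 2 if already alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, shifted by (1, 1): it "moved"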
00:24:59 Right, so for you then, free will is an illusion.
00:25:03 No, it’s a model and it’s a construct.
00:25:06 It’s basically a model that the system is making
00:25:08 of its own behavior.
00:25:09 And it’s the best model that it can come up with
00:25:11 under the circumstances.
00:25:12 And it can get replaced by a different model,
00:25:14 which is automatic behavior,
00:25:16 when you fully understand the mechanism
00:25:17 under which you are acting.
00:25:19 Yeah, but another word for model is what, story.
00:25:23 So it’s the story you’re telling.
00:25:25 I mean, do you actually have control?
00:25:27 Is there such a thing as a you
00:25:29 and is there such a thing as you have in control?
00:25:33 So like, are you manifesting your evolution as an entity?
00:25:42 In some sense, the you is the model of the system
00:25:44 that is in control.
00:25:45 It’s a story that the system tells itself
00:25:47 about somebody who is in control.
00:25:50 Yeah.
00:25:51 And the contents of that model are being used
00:25:53 to inform the behavior of the system.
00:25:56 Okay.
00:25:57 So the system is completely mechanical
00:26:00 and the system creates that story like a loom.
00:26:03 And then it uses the contents of that story
00:26:06 to inform its actions
00:26:07 and writes the results of that actions into the story.
00:26:11 So how’s that not an illusion?
00:26:13 The story is written then,
00:26:16 or rather we’re not the writers of the story.
00:26:21 Yes, but we always knew that.
00:26:24 No, we don’t know that.
00:26:25 When did we know that?
00:26:26 I think that’s mostly a confusion about concepts.
00:26:29 The conceptual illusion in our culture
00:26:31 comes from the idea that we live in physical reality
00:26:35 and that we experience physical reality
00:26:37 and that you have ideas about it.
00:26:39 And then you have this dualist interpretation
00:26:41 where you have two substances, res extensa,
00:26:45 the world that you can touch
00:26:46 and that is made of extended things
00:26:48 and res cogitans, which is the world of ideas.
00:26:51 And in fact, both of them are mental representations.
00:26:54 One is the representations of the world as a game engine
00:26:57 that your mind generates to make sense of the perceptual data.
00:27:01 And the other one,
00:27:02 yes, that’s what we perceive as the physical world.
00:27:04 But we already know that the physical world
00:27:05 is nothing like that, right?
00:27:07 Quantum mechanics is very different
00:27:08 from what you and me perceive as the world.
00:27:11 The world that you and me perceive is a game engine.
00:27:14 And there are no colors and sounds in the physical world.
00:27:17 They only exist in the game engine generated by your brain.
00:27:20 And then you have ideas
00:27:21 that cannot be mapped onto extended regions, right?
00:27:24 So the objects that have a spatial extension
00:27:26 in the game engine, res extensa,
00:27:29 and the objects that don’t have a physical extension
00:27:31 in the game engine are ideas.
00:27:34 And they both interact in our mind
00:27:36 to produce models of the world.
00:27:38 Yep, but, you know, when you play video games,
00:27:42 I understand that what’s actually happening
00:27:45 is zeros and ones inside of a computer,
00:27:50 inside of a CPU and a GPU,
00:27:52 but you’re still seeing like the rendering of that.
00:27:58 And you’re still making decisions,
00:28:00 whether to shoot, to turn left or to turn right,
00:28:03 if you’re playing a shooter,
00:28:04 or every time I started thinking about Skyrim
00:28:07 and Elder Scrolls and walking around in beautiful nature
00:28:09 and swinging a sword.
00:28:10 But it feels like you’re making decisions
00:28:13 inside that video game.
00:28:15 So even though you don’t have direct access
00:28:17 in terms of perception to the bits,
00:28:21 to the zeros and ones,
00:28:22 it still feels like you’re making decisions
00:28:24 and your decisions actually feels
00:28:27 like they’re being applied all the way down
00:28:30 to the zeros and ones.
00:28:32 So it feels like you have control,
00:28:33 even though you don’t have direct access to reality.
00:28:36 So there is basically a special character
00:28:38 in the video game that is being created
00:28:40 by the video game engine.
00:28:42 And this character is serving the aesthetics
00:28:43 of the video game, and that is you.
00:28:47 Yes, but I feel like I have control inside the video game.
00:28:50 Like all those like 12 year olds
00:28:53 that kick my ass on the internet.
00:28:55 So when you play the video game,
00:28:57 it doesn’t really matter that there’s zeros and ones, right?
00:28:59 You don’t care about the bits of the past.
00:29:01 You don’t care about the nature of the CPU
00:29:03 that it runs on.
00:29:04 What you care about are the properties of the game
00:29:06 that you’re playing.
00:29:07 And you hope that the CPU is good enough.
00:29:10 Yes.
00:29:10 And a similar thing happens when we interact with physics.
00:29:13 The world that you and me are in is not the physical world.
00:29:15 The world that you and me are in is a dream world.
00:29:19 How close is it to the real world though?
00:29:23 We know that it’s not very close,
00:29:25 but we know that the dynamics of the dream world
00:29:27 match the dynamics of the physical world
00:29:29 to a certain degree of resolution.
00:29:31 But the causal structure of the dream world is different.
00:29:35 So you see for instance waves crashing on your feet, right?
00:29:38 But there are no waves in the ocean.
00:29:39 There’s only water molecules that have tangents
00:29:42 between the molecules that are the result of electrons
00:29:47 in the molecules interacting with each other.
00:29:50 Aren’t they like very consistent?
00:29:52 We’re just seeing a very crude approximation.
00:29:55 Isn’t our dream world very consistent,
00:29:59 like to the point of being mapped directly one to one
00:30:02 to the actual physical world
00:30:04 as opposed to us being completely tricked?
00:30:07 Is this like where you have, like, Donald Hoffman?
00:30:09 It’s not a trick.
00:30:10 That’s my point.
00:30:10 It’s not an illusion.
00:30:11 It’s a form of data compression.
00:30:13 It’s an attempt to deal with the dynamics
00:30:15 of too many parts to count
00:30:16 at the level at which we are entangled
00:30:18 with the best model that you can find.
00:30:20 Yeah, so we can act in that dream world
00:30:22 and our actions have impact in the real world,
00:30:26 in the physical world to which we don’t have access.
00:30:28 Yes, but it’s basically like accepting the fact
00:30:31 that the software that we live in,
00:30:33 the dream that we live in is generated
00:30:35 by something outside of this world that you and me are in.
00:30:38 So is the software deterministic
00:30:40 and do we not have any control?
00:30:42 Do we have, so free will is having a conscious being.
00:30:49 Free will is the monkey being able to steer the elephant.
00:30:55 No, it’s slightly different.
00:30:58 Basically in the same way as you are modeling
00:31:00 the water molecules in the ocean that engulf your feet
00:31:03 when you are walking on the beach as waves
00:31:05 and there are no waves,
00:31:07 but only the atoms or more complicated stuff
00:31:09 underneath the atoms and so on.
00:31:11 And you know that, right?
00:31:14 You would accept, yes,
00:31:15 there is a certain abstraction that happens here.
00:31:17 It’s a simplification of what happens,
00:31:19 a simplification that is designed
00:31:22 in such a way that your brain can deal with it,
00:31:24 temporally and spatially, in terms of resources,
00:31:27 and tuned for the predictive value.
00:31:28 So you can predict with some accuracy
00:31:31 whether your feet are going to get wet or not.
00:31:33 But it’s a really good interface and approximation.
00:31:37 It’s like E equals mc squared is a good approximation;
00:31:40 equations are good approximations,
00:31:43 they’re much better approximations.
00:31:45 So to me, waves is a really nice approximation
00:31:49 of all the complexity that’s happening underneath.
00:31:51 Basically it’s a machine learning model
00:31:53 that is constantly tuned to minimize surprises.
00:31:55 So it basically tries to predict as well as it can
00:31:58 what you’re going to perceive next.
00:31:59 Are we talking about, which is the machine learning?
00:32:02 Our perception system or the dream world?
00:32:05 The dream world is the result
00:32:08 of the machine learning process of the perceptual system.
00:32:11 That’s doing the compression.
00:32:12 Yes.
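(A toy illustration of “constantly tuned to minimize surprises”: a predictor nudges its estimate toward each new observation, i.e. it descends its own prediction error. The stream of observations and the learning rate are arbitrary assumptions, not anything from the conversation.)

```python
def update(estimate, observation, learning_rate=0.1):
    """Move the estimate a small step in the direction that reduces surprise."""
    surprise = observation - estimate   # prediction error
    return estimate + learning_rate * surprise

estimate = 0.0
stream = [1.0, 1.2, 0.9, 1.1, 1.0] * 20
for observation in stream:
    estimate = update(estimate, observation)
print(round(estimate, 2))  # hovers near the average of the stream (about 1.0)
```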
00:32:13 And the model of you as an agent
00:32:15 is not a different type of model or it’s a different type,
00:32:19 but not different as in its model-like nature
00:32:23 from the model of the ocean, right?
00:32:25 Some things are oceans, some things are agents.
00:32:28 And one of these agents is using your own control model,
00:32:31 the output of your model,
00:32:32 the things that you perceive yourself as doing.
00:32:36 And that is you.
00:32:38 What about the fact that when you’re standing
00:32:44 with the water on your feet and you’re looking out
00:32:47 into the vast open water of the ocean
00:32:51 and then there’s a beautiful sunset
00:32:54 and the fact that it’s beautiful
00:32:56 and then maybe you have friends or a loved one with you
00:32:59 and you feel love, what is that?
00:33:00 As the dream world or what is that?
00:33:02 Yes, it’s all happening inside of the dream.
00:33:05 Okay.
00:33:06 But see, the word dream makes it seem like it’s not real.
00:33:11 No, of course it’s not real.
00:33:14 The physical universe is real,
00:33:16 but the physical universe is incomprehensible
00:33:18 and it doesn’t have any feeling of realness.
00:33:21 The feeling of realness that you experience
00:33:22 gets attached to certain representations
00:33:25 where your brain assesses,
00:33:26 this is the best model of reality that I have.
00:33:28 So the only thing that’s real to you
00:33:30 is the thing that’s happening at the very base of reality.
00:33:34 Yeah, for something to be real, it needs to be implemented.
00:33:40 So the model that you have of reality
00:33:42 is real in as far as it is a model.
00:33:45 It’s an appropriate description of the world
00:33:47 to say that there are models that are being experienced,
00:33:51 but the world that you experience
00:33:54 is not necessarily implemented.
00:33:56 There is a difference between a reality,
00:33:59 a simulation and a simulacrum.
00:34:02 The reality that we’re talking about
00:34:04 is something that fully emerges
00:34:06 over a causally closed lowest layer.
00:34:08 And the idea of physicalism is that we are in that layer,
00:34:11 that basically our world emerges over that.
00:34:13 Every alternative to physicalism is a simulation theory,
00:34:16 which basically says that we are
00:34:17 in some kind of simulation universe
00:34:19 and the real world needs to be in a parent universe of that,
00:34:22 where the actual causal structure is, right?
00:34:24 And when you look at the ocean and your own mind,
00:34:27 you are looking at a simulation
00:34:28 that explains what you’re going to see next.
00:34:31 So we are living in a simulation.
00:34:32 Yes, but a simulation generated by our own brains.
00:34:35 Yeah.
00:34:36 And this simulation is different from the physical reality
00:34:39 because the causal structure that is being produced,
00:34:42 what you are seeing is different
00:34:43 from the causal structure of physics.
00:34:44 But consistent.
00:34:46 Hopefully, if not, then you are going to end up
00:34:49 in some kind of institution
00:34:51 where people will take care of you
00:34:52 because your behavior will be inconsistent, right?
00:34:54 Your behavior needs to work in such a way
00:34:57 that it’s interacting with an accurately predictive
00:35:00 model of reality.
00:35:00 And if your brain is unable to make your model
00:35:03 of reality predictive, you will need help.
00:35:06 So what do you think about Donald Hoffman’s argument
00:35:10 that it doesn’t have to be consistent,
00:35:12 the dream world to what he calls like the interface
00:35:17 to the actual physical reality,
00:35:19 where there could be evolution?
00:35:20 I think he makes an evolutionary argument,
00:35:23 which is like, it could be an evolutionary advantage
00:35:26 to have the dream world drift away from physical reality.
00:35:30 I think that only works if you have tenure.
00:35:32 As long as you’re still interacting with the ground truth,
00:35:35 your model needs to be somewhat predictive.
00:35:38 Well, in some sense, humans have achieved a kind of tenure
00:35:42 in the animal kingdom.
00:35:45 Yeah.
00:35:45 And at some point we became too big to fail,
00:35:47 so we became postmodernist.
00:35:51 It all makes sense now.
00:35:52 We can just change the version of reality that we like.
00:35:54 Oh man.
00:35:56 Okay.
00:35:57 Yeah, but basically you can do magic.
00:36:00 You can change your assessment of reality,
00:36:02 but eventually reality is going to come bite you in the ass
00:36:05 if it’s not predictive.
00:36:06 Do you have a sense of what is that base layer
00:36:11 of physical reality?
00:36:12 You have like, so you have these attempts
00:36:15 at the theories of everything,
00:36:17 the very, very small, like string theory,
00:36:21 or what Stephen Wolfram talks about with the hypergraphs.
00:36:25 These are these tiny, tiny, tiny, tiny objects.
00:36:28 And then there is more like quantum mechanics
00:36:32 that’s talking about objects that are much larger,
00:36:34 but still very, very, very tiny.
00:36:36 Do you have a sense of where the tiniest thing is
00:36:40 that is like at the lowest level?
00:36:42 The turtle at the very bottom.
00:36:44 Do you have a sense what that turtle is?
00:36:45 I don’t think that you can talk about where it is
00:36:48 because space is emerging over the activity of these things.
00:36:51 So space, the coordinates only exist
00:36:55 in relation to the things, other things.
00:36:58 And so you could, in some sense, abstract it into locations
00:37:01 that can hold information and trajectories
00:37:04 that the information can take
00:37:05 between the different locations.
00:37:06 And this is how we construct our notion of space.
00:37:10 And physicists usually have a notion of space
00:37:14 that is continuous.
00:37:15 And this is a point where I tend to agree
00:37:19 with people like Stephen Wolfram
00:37:20 who are very skeptical of the geometric notions.
00:37:23 I think that geometry is the dynamics
00:37:25 of too many parts to count.
00:37:27 And there are no infinities.
00:37:30 If there were true infinities,
00:37:32 you would be running into contradictions,
00:37:34 which is in some sense what Gödel and Turing discovered
00:37:37 in response to Hilbert’s call.
00:37:39 So there are no infinities.
00:37:41 There are no infinities.
00:37:42 Infinity’s fake.
00:37:43 There is unboundedness, but if you have a language
00:37:45 that talks about infinity, at some point,
00:37:47 the language is going to contradict itself,
00:37:49 which means it’s no longer valid.
00:37:51 In order to deal with infinities and mathematics,
00:37:54 you have to postulate the existence initially.
00:37:57 You cannot construct the infinities.
00:37:59 And that’s an issue, right?
00:38:00 You cannot build up an infinity from zero.
00:38:02 But in practice, you never do this, right?
00:38:04 When you perform calculations,
00:38:06 you only look at the dynamics of too many parts to count.
00:38:09 And usually these numbers are not that large.
00:38:13 They’re not googols or something.
00:38:14 The infinities that we are dealing with in our universe
00:38:18 are mathematically speaking, relatively small integers.
00:38:23 And still what we’re looking at is dynamics
00:38:26 where a trillion things behave similar
00:38:30 to a hundred trillion things
00:38:32 or something that is very, very large
00:38:37 because they’re converging.
00:38:39 And these convergent dynamics, these operators,
00:38:41 this is what we deal with when we are doing the geometry.
00:38:45 Geometry is stuff where we can pretend that it’s continuous
00:38:48 because if we subdivide the space sufficiently fine grained,
00:38:54 these things approach a certain dynamic.
00:38:56 And this dynamic that they approach, that is what we mean by it.
00:38:59 But I don’t think that infinity would work, so to speak,
00:39:02 that you would know the last digit of pi
00:39:05 and that you have a physical process
00:39:06 that rests on knowing the last digit of pi.
00:39:09 Yeah, that could be just a peculiar quirk
00:39:12 of human cognition that we like discrete.
00:39:15 Discrete makes sense to us.
00:39:16 Infinity doesn’t, so in terms of our intuitions.
00:39:19 No, the issue is that everything that we think about
00:39:22 needs to be expressed in some kind of mental language,
00:39:25 not necessarily natural language,
00:39:27 but some kind of mathematical language
00:39:29 that your neurons can speak
00:39:31 that refers to something in the world.
00:39:34 And what we have discovered
00:39:35 is that we cannot construct a notion of infinity
00:39:39 without running into contradictions,
00:39:40 which means that such a language is no longer valid.
00:39:43 And I suspect this is what made Pythagoras so unhappy
00:39:46 when somebody came up with the notion of irrational numbers
00:39:49 before it was time, right?
00:39:50 There’s this myth that he had this person killed
00:39:52 when he blabbed out the secret
00:39:54 that not everything can be expressed
00:39:55 as a ratio between two numbers,
00:39:57 but there are numbers between the ratios.
00:39:59 The world was not ready for this.
00:40:01 And I think he was right.
00:40:02 That has confused mathematicians very seriously
00:40:06 because these numbers are not values, they are functions.
00:40:09 And so you can calculate these functions
00:40:11 to a certain degree of approximation,
00:40:13 but you cannot pretend that pi has actually a value.
00:40:17 Pi is a function that would approach this value
00:40:20 to some degree,
00:40:21 but nothing in the world rests on knowing pi.
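(On this reading, pi is a procedure you run to a chosen precision rather than a completed value. The Leibniz series below is a standard, slowly converging textbook formula, used here purely as an illustration of that point.)

```python
def pi_approx(terms):
    """Evaluate pi via the Leibniz series 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

for terms in (10, 1_000, 100_000):
    print(terms, pi_approx(terms))  # 3.0418..., 3.1405..., 3.1415...
# More terms buy more digits; no step ever requires a "completed" pi.
```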
00:40:26 How important is this distinction
00:40:28 between discrete and continuous for you to get to the bottom of?
00:40:32 Because there’s a, I mean, in discussion of your favorite
00:40:36 flavor of the theory of everything,
00:40:39 there’s a few on the table.
00:40:41 So there’s string theory, there’s a particular,
00:40:45 there’s loop quantum gravity,
00:40:48 which focused on one particular unification.
00:40:53 There’s just a bunch of favorite flavors
00:40:56 of different people trying to propose
00:40:59 a theory of everything.
00:41:01 Eric Weinstein and a bunch of people throughout history.
00:41:04 And then of course, Stephen Wolfram,
00:41:06 who I think is one of the only people doing a discrete.
00:41:10 No, no, there’s a bunch of physicists
00:41:12 who do this right now.
00:41:13 And like Toffoli and Tomasello.
00:41:17 And digital physics is something
00:41:21 that is, I think, growing in popularity.
00:41:24 But the main reason why this is interesting
00:41:29 is because it’s important sometimes to settle disagreements.
00:41:34 I don’t think that you need infinities at all,
00:41:36 and you never needed them.
00:41:38 You can always deal with very large numbers
00:41:40 and you can deal with limits, right?
00:41:42 We are fine with doing that.
00:41:43 You don’t need any kind of infinity.
00:41:45 You can build your computer algebra systems just as well
00:41:48 without believing in infinity in the first place.
00:41:50 So you’re okay with limits?
00:41:51 Yeah, so basically a limit means that something
00:41:54 is behaving pretty much the same
00:41:57 if you make the number large.
00:41:59 Right, because it’s converging to a certain value.
00:42:02 And at some point the difference becomes negligible
00:42:04 and you can no longer measure it.
00:42:06 And in this sense, you have things
00:42:08 that if you have an n-gon which has enough corners,
00:42:12 then it’s going to behave like a circle at some point, right?
00:42:15 And it’s only going to be in some kind of esoteric thing
00:42:18 that cannot exist in the physical universe
00:42:21 that you would be talking about this perfect circle.
00:42:23 And now it turns out that it also wouldn’t work
00:42:25 in mathematics because you cannot construct mathematics
00:42:28 that has infinite resolution
00:42:30 without running into contradictions.
00:42:32 So that is itself not that important
00:42:35 because we never did that, right?
00:42:36 It’s just a thing that some people thought we could.
00:42:39 And this leads to confusion.
00:42:40 So for instance, Roger Penrose uses this as an argument
00:42:43 to say that there are certain things
00:42:46 that mathematicians can do dealing with infinities
00:42:50 and by extension our mind can do
00:42:53 that computers cannot do.
00:42:55 Yeah, he talks about that the human mind
00:42:58 can do certain mathematical things
00:43:00 that the computer as defined
00:43:02 by the universal Turing machine cannot.
00:43:06 Yes.
00:43:07 So that it has to do with infinity.
00:43:08 Yes, it’s one of the things.
00:43:10 So he is basically pointing at the fact
00:43:13 that there are things that are possible
00:43:15 in the mathematical mind and in pure mathematics
00:43:21 that are not possible in machines
00:43:24 that can be constructed in the physical universe.
00:43:27 And because he’s an honest guy,
00:43:29 he thinks this means that present physics
00:43:31 cannot explain operations that happen in our mind.
00:43:34 Do you think he’s right?
00:43:35 And so let’s leave his discussion
00:43:38 of consciousness aside for the moment.
00:43:40 Do you think he’s right about just
00:43:42 what he’s basically referring to as intelligence?
00:43:46 So is the human mind fundamentally more capable
00:43:50 as a thinking machine than a universal Turing machine?
00:43:53 No.
00:43:55 But so he’s suggesting that, right?
00:43:58 So our mind is actually less than a Turing machine.
00:44:01 There can be no Turing machine
00:44:02 because it’s defined as having an infinite tape.
00:44:05 And we always only have a finite tape.
00:44:07 But he’s saying it’s better.
00:44:08 Our minds can only perform finitely many operations.
00:44:10 Yes, he thinks so.
00:44:10 He’s saying it can do the kind of computation
00:44:13 that the Turing machine cannot.
00:44:14 And that’s because he thinks that our minds
00:44:16 can do operations that have infinite resolution
00:44:19 in some sense.
00:44:21 And I don’t think that’s the case.
00:44:23 Our minds are just able to discover these limit operators
00:44:26 over too many parts to count.
00:44:27 I see.
00:44:30 What about his idea that consciousness
00:44:32 is more than a computation?
00:44:37 So it’s more than something that a Turing machine can do.
00:44:42 So again, saying that there’s something special
00:44:44 about our mind that cannot be replicated in a machine.
00:44:49 The issue is that I don’t even know
00:44:51 how to construct a language to express
00:44:54 this statement correctly.
00:44:56 Well,
00:45:01 the basic statement is there’s a human experience
00:45:06 that includes intelligence, that includes self awareness,
00:45:09 that includes the hard problem of consciousness.
00:45:12 And the question is, can that be fully simulated
00:45:16 in the computer, in the mathematical model of the computer
00:45:20 as we understand it today?
00:45:23 Roger Penrose says no.
00:45:25 So the universal Turing machine
00:45:30 cannot simulate the universe.
00:45:32 So the interesting question is,
00:45:34 and you have to ask him this is, why not?
00:45:36 What is this specific thing that cannot be modeled?
00:45:39 And when I looked at his writings
00:45:42 and I haven’t read all of it,
00:45:43 but when I read, for instance,
00:45:45 the section that he writes in the introduction
00:45:49 to a road to infinity,
00:45:51 the thing that he specifically refers to
00:45:53 is the way in which human minds deal with infinities.
00:45:57 And that itself can, I think, easily be deconstructed.
00:46:03 A lot of people feel that our experience
00:46:05 cannot be explained in a mechanical way.
00:46:08 And therefore it needs to be different.
00:46:11 And I concur, our experience is not mechanical.
00:46:14 Our experience is simulated.
00:46:16 It exists only in a simulation.
00:46:18 Only a simulation can be conscious.
00:46:19 Physical systems cannot be conscious
00:46:21 because they’re only mechanical.
00:46:23 Cells cannot be conscious.
00:46:25 Neurons cannot be conscious.
00:46:26 Brains cannot be conscious.
00:46:27 People cannot be conscious
00:46:28 as far as if you understand them as physical systems.
00:46:31 What can be conscious is the story of the system
00:46:36 in the world where you write all these things
00:46:37 into the story.
00:46:39 You have experiences for the same reason
00:46:41 that a character in a novel has experiences
00:46:43 because it’s written into the story.
00:46:45 And now the system is acting on that story.
00:46:48 And it’s not a story that is written in a natural language.
00:46:50 It’s written in a perceptual language,
00:46:52 in this multimedia language of the game engine.
00:46:55 And in there, you write in what kind of experience you have
00:46:59 and what this means for the behavior of the system,
00:47:01 for your behavior tendencies, for your focus,
00:47:03 for your attention, for your experience of valence
00:47:05 and so on.
00:47:06 And this is being used to inform the behavior of the system
00:47:09 in the next step.
00:47:10 And then the story updates with the reactions of the system
00:47:15 and the changes in the world and so on.
00:47:17 And you live inside of that model.
00:47:19 You don’t live inside of the physical reality.
00:47:23 And I mean, just to linger on it, like you say, okay,
00:47:28 it’s in the perceptual language,
00:47:30 the multimodal perceptual language.
00:47:33 That’s the experience.
00:47:34 That’s what consciousness is within that model,
00:47:38 within that story.
00:47:40 But do you have agency?
00:47:43 When you play a video game, you can turn left
00:47:46 and you can turn right in that story.
00:47:49 So in that dream world, how much control do you have?
00:47:54 Is there such a thing as you in that story?
00:47:57 Like, is it right to say the main character,
00:48:00 you know, everybody’s NPCs,
00:48:02 and then there’s the main character
00:48:04 and you’re controlling the main character?
00:48:07 Or is that an illusion?
00:48:08 Is there a main character that you’re controlling?
00:48:10 I’m getting to the point of like the free will point.
00:48:14 Imagine that you are building a robot that plays soccer.
00:48:17 And you’ve been to MIT computer science,
00:48:19 you basically know how to do that, right?
00:48:22 And so you would say the robot is an agent
00:48:25 that solves a control problem,
00:48:27 how to get the ball into the goal.
00:48:29 And it needs to perceive the world
00:48:30 and the world is disturbing him in trying to do this, right?
00:48:33 So he has to control many variables to make that happen
00:48:35 and to project itself and the ball into the future
00:48:38 and understand its position on the field
00:48:40 relative to the ball and so on,
00:48:42 and the position of its limbs
00:48:44 or in the space around it and so on.
00:48:46 So it needs to have an adequate model
00:48:48 that abstracts reality in a useful way.
00:48:51 And you could say that this robot does have agency
00:48:55 over what it’s doing in some sense.
00:48:58 And the model is going to be a control model.
00:49:01 And inside of that control model,
00:49:03 you can possibly get to a point
00:49:05 where this thing is sufficiently abstract
00:49:07 to discover its own agency.
00:49:09 Our current robots don’t do that.
00:49:10 They don’t have a unified model of the universe,
00:49:13 but there’s not a reason why we shouldn’t be getting there
00:49:16 at some point in the not too distant future.
00:49:18 And once that happens,
00:49:20 you will notice that the robot tells a story
00:49:23 about a robot playing soccer.
00:49:25 So the robot will experience itself playing soccer
00:49:29 in a simulation of the world that it uses
00:49:32 to construct a model of the locations of its legs
00:49:35 and limbs in space on the field
00:49:38 with relationship to the ball.
00:49:39 And it’s not going to be at the level of the molecules.
00:49:42 It will be an abstraction that is exactly at the level
00:49:45 that is most suitable for path planning
00:49:47 of the movements of the robot.
00:49:49 It’s going to be a high level abstraction,
00:49:51 but a very useful one that is as predictive
00:49:53 as we can make it.
00:49:55 And inside of that story,
00:49:56 there is a model of the agency of that system.
00:49:58 So this model can accurately predict
00:50:03 that the contents of the model
00:50:04 are going to be driving the behavior of the robot
00:50:07 in the immediate future.
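A minimal sketch of the kind of self-including control model being described, in Python. The forward model, the contact rule, and all the numbers are illustrative assumptions rather than any real robot's architecture; the point is only that the agent's model contains its own candidate action and predicts how that action drives what happens next.

import numpy as np

# Toy soccer control: the agent simulates "if I commit to action a, where will
# the ball and I be next?" and picks the action whose predicted outcome moves
# the ball toward the goal. Dynamics and constants are made up for illustration.

GOAL = np.array([10.0, 0.0])

def forward_model(robot, ball, action, dt=0.1):
    """Predict the next robot and ball positions under a velocity command."""
    next_robot = robot + action * dt
    if np.linalg.norm(next_robot - ball) < 0.3:      # crude contact model
        push = ball - next_robot
        push = push / (np.linalg.norm(push) + 1e-9)
        next_ball = ball + push * 0.5
    else:
        next_ball = ball
    return next_robot, next_ball

def choose_action(robot, ball, candidates):
    """One-step lookahead: the agent projects itself and the ball into the future."""
    def score(action):
        r, b = forward_model(robot, ball, action)
        return np.linalg.norm(b - GOAL) + 0.1 * np.linalg.norm(r - b)
    return min(candidates, key=score)

robot, ball = np.array([0.0, 0.0]), np.array([1.0, 0.5])
candidates = [np.array([np.cos(t), np.sin(t)]) * 2.0
              for t in np.linspace(0.0, 2.0 * np.pi, 16)]
for _ in range(50):
    action = choose_action(robot, ball, candidates)
    robot, ball = forward_model(robot, ball, action)
print("ball position after 50 steps:", ball)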
00:50:08 But there’s the hard problem of consciousness,
00:50:12 and I would also add
00:50:14 that there’s a subjective experience of free will as well,
00:50:18 and I’m not sure where the robot gets that,
00:50:20 where that little leap is.
00:50:22 Because for me right now,
00:50:24 everything I imagine with that robot,
00:50:26 as it gets more and more and more sophisticated,
00:50:29 the agency comes from the programmer of the robot still,
00:50:33 of what was programmed in.
00:50:35 You could probably do an end to end learning system.
00:50:38 You maybe need to give it a few priors.
00:50:40 So you nudge the architecture in the right direction
00:50:42 that it converges more quickly,
00:50:44 but ultimately discovering the suitable hyperparameters
00:50:47 of the architecture is also only a search process.
00:50:50 And for us, that search process was evolution,
00:50:52 which has informed our brain architecture
00:50:55 so we can converge in a single lifetime
00:50:57 on useful interaction with the world
00:50:59 and the formation of a self model.
00:51:00 The problem is if we define hyperparameters broadly,
00:51:03 so it’s not just the parameters that control
00:51:06 this end to end learning system,
00:51:08 but the entirety of the design of the robot.
00:51:11 Like there’s, you have to remove the human completely
00:51:15 from the picture.
00:51:15 And then in order to build the robot,
00:51:17 you have to create an entire universe.
00:51:20 Cause you have to go, you can’t just shortcut evolution.
00:51:22 You have to go from the very beginning
00:51:24 in order for it to have,
00:51:25 cause I feel like there’s always a human
00:51:28 pulling the strings and that makes it seem like
00:51:32 the robot is cheating.
00:51:33 It’s getting a shortcut to consciousness.
00:51:35 And when you are looking at the current Boston Dynamics robots,
00:51:38 it doesn’t look as if there is somebody
00:51:40 pulling the strings.
00:51:40 It doesn’t look like cheating anymore.
00:51:42 Okay, so let’s go there.
00:51:43 Cause I got to talk to you about this.
00:51:44 So obviously with the case of Boston Dynamics,
00:51:47 as you may or may not know,
00:51:49 it’s always either hard coded or remote controlled.
00:51:54 There’s no intelligence.
00:51:55 I don’t know how the current generation
00:51:57 of Boston Dynamics robots works,
00:51:59 but what I’ve been told about the previous ones
00:52:02 was that it’s basically all cybernetic control,
00:52:05 which means you still have feedback mechanisms and so on,
00:52:08 but it’s not deep learning for the most part
00:52:11 as it’s currently done.
00:52:13 It’s for the most part,
00:52:14 just identifying a control hierarchy
00:52:16 that is congruent to the limbs that exist
00:52:19 and the parameters that need to be optimized
00:52:21 for the movement of these limbs.
00:52:22 And then there is a convergence process.
00:52:24 So it’s basically just regression
00:52:26 that you would need to control this.
00:52:27 But again, I don’t know whether that’s true.
00:52:29 That’s just what I’ve been told about how they work.
00:52:31 We have to separate several levels of discussion here.
00:52:35 So the only thing they do is pretty sophisticated control
00:52:39 with no machine learning
00:52:40 in order to maintain balance or to right themselves.
00:52:45 It’s a control problem in terms of using the actuators
00:52:49 so that when it’s pushed or when it steps on a thing
00:52:52 that’s uneven, it can always maintain balance.
00:52:55 And there’s a tricky set of heuristics around that,
00:52:57 but that’s the only goal.
00:53:00 Everything you see Boston Dynamics robots doing
00:53:02 that is compelling to us humans,
00:53:06 which is any kind of higher order movement,
00:53:09 like turning, wiggling its butt,
00:53:13 jumping back onto its two feet, dancing.
00:53:18 Dancing is even worse because dancing is hard coded in.
00:53:22 It’s choreographed by humans.
00:53:25 There’s choreography software.
00:53:27 So in all of that high level movement,
00:53:30 there’s nothing that you can call,
00:53:34 certainly nothing you can call AI,
00:53:35 and there aren’t even basic heuristics.
00:53:39 It’s all hard coded in.
00:53:41 And yet we humans immediately project agency onto them,
00:53:47 which is fascinating.
00:53:48 So the dog here doesn’t necessarily have agency.
00:53:53 What it has is cybernetic control.
00:53:55 And the cybernetic control means you have a hierarchy
00:53:57 of feedback loops that keep the behavior
00:53:59 in certain boundaries so the robot doesn’t fall over
00:54:02 and it’s able to perform the movements.
00:54:04 And the choreography cannot really happen
00:54:06 with motion capture because the robot would fall over
00:54:09 because the physics of the robot,
00:54:10 the weight distribution and so on is different
00:54:12 from the weight distribution in the human body.
00:54:15 So if you were using directly motion captured movements
00:54:19 of a human body and projecting them onto this robot,
00:54:21 it wouldn’t work.
00:54:22 You can do this with a computer animation.
00:54:24 It will look a little bit off, but who cares?
00:54:26 But if you want to correct for the physics,
00:54:29 you need to basically tell the robot
00:54:31 where it should move its limbs.
00:54:33 And then the control algorithm is going
00:54:35 to approximate a solution that makes it possible
00:54:38 within the physics of the robot.
00:54:41 And you have to find the basic solution
00:54:43 for making that happen.
00:54:44 And there’s probably going to be some regression necessary
00:54:47 to get the control architecture to make these movements.
00:54:51 But those two layers are separate.
00:54:52 So the thing is, the higher level instruction
00:54:56 of how you should move and where you should move
00:54:59 sits at a higher level.
00:54:59 Yeah, so I expect that the control level
00:55:01 of these robots at some level is dumb.
00:55:03 This is just the physical control movement,
00:55:06 the motor architecture.
00:55:07 But it’s a relatively smart motor architecture.
00:55:10 It’s just that there is no high level deliberation
00:55:12 about what decisions to make necessarily, right?
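A toy sketch of what such a low-level cybernetic control layer can look like, assuming a PID feedback loop, which is a common classical choice and not a claim about what Boston Dynamics actually uses. A higher layer (the choreography or planner) only hands down setpoints; the loop keeps a made-up one-joint plant tracking them within its physics.

def pid_step(error, prev_error, integral, kp=8.0, ki=0.5, kd=1.0, dt=0.01):
    """One update of a textbook PID controller; gains are illustrative."""
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, integral

# toy one-joint "plant": angle responds to torque, with damping and stiffness
angle, velocity = 0.0, 0.0
integral, prev_error = 0.0, 0.0
dt = 0.01

# the higher layer only supplies target angles over time
setpoints = [0.3] * 200 + [-0.2] * 200

for target in setpoints:
    error = target - angle
    torque, integral = pid_step(error, prev_error, integral, dt=dt)
    prev_error = error
    acceleration = torque - 2.0 * velocity - 4.0 * angle   # crude dynamics
    velocity += acceleration * dt
    angle += velocity * dt

print(f"final angle {angle:.3f}, last target {setpoints[-1]}")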
00:55:14 But see, it doesn’t feel like free will or consciousness.
00:55:17 No, no, that was not where I was trying to get to.
00:55:20 I think that in our own body, we have that too.
00:55:24 So we have a certain thing that is basically
00:55:26 just a cybernetic control architecture
00:55:29 that is moving our limbs.
00:55:31 And deep learning can help in discovering
00:55:34 such an architecture if you don’t have it
00:55:35 in the first place.
00:55:37 If you already know your hardware,
00:55:38 you can maybe handcraft it.
00:55:40 But if you don’t know your hardware,
00:55:41 you can search for such an architecture.
00:55:43 And this work already existed in the 80s and 90s.
00:55:46 People were starting to search for control architectures
00:55:49 by motor babbling and so on,
00:55:51 and just using reinforcement learning architectures
00:55:53 to discover such a thing.
00:55:55 And now imagine that you have
00:55:57 the cybernetic control architecture already inside of you.
00:56:01 And you extend this a little bit.
00:56:03 So you are seeking out food, for instance,
00:56:06 or rest, and so on.
00:56:08 And you get to have a baby at some point.
00:56:11 And now you add more and more control layers to this.
00:56:15 And the system is reverse engineering
00:56:17 its own control architecture
00:56:19 and builds a high level model to synchronize
00:56:22 the pursuit of very different conflicting goals.
00:56:26 And this is how I think you get to purposes.
00:56:28 Purposes are models of your goals.
00:56:30 The goals may be intrinsic
00:56:31 as the result of the different set point violations
00:56:33 that you have,
00:56:34 hunger and thirst for very different things,
00:56:37 and rest and pain avoidance and so on.
00:56:39 And you put all these things together
00:56:41 and eventually you need to come up with a strategy
00:56:44 to synchronize them all.
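A minimal sketch of that set-point picture: a few cybernetic drives report how badly their set points are violated, and a simple arbiter synchronizes them by picking the most urgent behavior. The drive names, gains, and behaviors are illustrative assumptions, not a model of real physiology.

from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    set_point: float
    current: float
    urgency_gain: float
    behavior: str          # what the system does when this drive wins

    def urgency(self) -> float:
        # deviation from the set point, scaled by how costly the deviation is
        return self.urgency_gain * abs(self.set_point - self.current)

drives = [
    Drive("energy", set_point=1.0, current=0.4, urgency_gain=1.0, behavior="seek food"),
    Drive("rest", set_point=1.0, current=0.8, urgency_gain=0.5, behavior="sleep"),
    Drive("integrity", set_point=1.0, current=0.95, urgency_gain=3.0, behavior="avoid pain source"),
]

def arbitrate(drives):
    """Greedy arbitration: act on the most violated set point. A real system
    would plan over longer horizons instead of switching greedily."""
    return max(drives, key=lambda d: d.urgency())

winner = arbitrate(drives)
print(f"most urgent drive: {winner.name} -> {winner.behavior}")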
00:56:46 And you don’t need just to do this alone by yourself
00:56:49 because we are state building organisms.
00:56:51 We cannot function in isolation,
00:56:53 the way that Homo sapiens is set up.
00:56:55 So our own behavior only makes sense
00:56:58 when you zoom out very far into a society
00:57:00 or even into ecosystemic intelligence on the planet
00:57:04 and our place in it.
00:57:06 So the individual behavior only makes sense
00:57:08 in these larger contexts.
00:57:09 And we have a number of priors built into us.
00:57:11 So we are behaving as if we were acting
00:57:14 on these high level goals pretty much right from the start.
00:57:17 And eventually in the course of our life,
00:57:19 we can reverse engineer the goals that we’re acting on,
00:57:22 what actually are our higher level purposes.
00:57:25 And the more we understand that,
00:57:27 the more our behavior makes sense.
00:57:28 But this is all at this point,
00:57:30 complex stories within stories
00:57:32 that are driving our behavior.
00:57:34 Yeah, I just don’t know how big of a leap it is
00:57:38 to start creating a system
00:57:40 that’s able to tell stories within stories.
00:57:44 Like how big of a leap that is
00:57:45 from where currently Boston Dynamics is
00:57:48 or any robot that’s operating in the physical space.
00:57:53 And that leap might be big
00:57:56 if it requires to solve the hard problem of consciousness,
00:57:59 which is telling a hell of a good story.
00:58:01 I suspect that consciousness itself is relatively simple.
00:58:05 What’s hard is perception
00:58:07 and the interface between perception and reasoning.
00:58:11 That’s, for instance, the idea of the consciousness prior
00:58:14 from Yoshua Bengio that would be built into such a system.
00:58:18 And what he describes, and I think that’s accurate,
00:58:22 is that our own model of the world
00:58:27 can be described through something like an energy function.
00:58:29 The energy function is modeling the contradictions
00:58:32 that exist within the model at any given point.
00:58:34 And you try to minimize these contradictions,
00:58:36 the tensions in the model.
00:58:38 And to do this, you need to sometimes test things.
00:58:41 You need to conditionally disambiguate figure and ground.
00:58:43 You need to distinguish whether this is true
00:58:46 or that is true, and so on.
00:58:47 Eventually you get to an interpretation,
00:58:49 but you will need to manually depress a few points
00:58:52 in your model to let it snap into a state that makes sense.
00:58:55 And this function that tries to get the biggest dip
00:58:57 in the energy function in your model,
00:58:59 according to Yoshua Bengio, is related to consciousness.
00:59:02 It’s a low dimensional discrete function
00:59:04 that tries to maximize this dip in the energy function.
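A hedged toy rendering of that energy-function picture, my own simplification rather than Bengio's actual formulation: soft constraints between scene variables add energy when violated, and a low-dimensional discrete intervention, here flipping one variable at a time, is chosen by the size of the dip it produces.

# toy scene variables for a nose/face style example
variables = ["nose", "face", "face_nearby"]

def energy(assignment):
    """Count violated constraints; lower energy means fewer contradictions."""
    e = 0.0
    if assignment["nose"] and not assignment["face_nearby"]:
        e += 2.0          # a nose without a face nearby is a contradiction
    if assignment["face_nearby"] and not assignment["face"]:
        e += 2.0          # a nearby face implies a face
    if not assignment["nose"]:
        e += 0.5          # weak sensory evidence: it did look a bit like a nose
    return e

def best_flip(assignment):
    """Try every single-variable flip; return the one with the biggest energy dip."""
    base = energy(assignment)
    best_var, best_dip = None, 0.0
    for var in variables:
        flipped = dict(assignment, **{var: not assignment[var]})
        dip = base - energy(flipped)
        if dip > best_dip:
            best_var, best_dip = var, dip
    return best_var

state = {"nose": True, "face": False, "face_nearby": False}
while True:
    var = best_flip(state)
    if var is None:
        break
    state[var] = not state[var]
print("settled interpretation:", state, "energy:", energy(state))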
00:59:09 Yeah, I think I would need to dig into details
00:59:13 because I think the way he uses the word consciousness
00:59:15 is more akin to like self awareness,
00:59:17 like modeling yourself within the world,
00:59:20 as opposed to the subjective experience, the hard problem.
00:59:23 No, it’s not even the self is in the world.
00:59:26 The self is the agent and you don’t need to be aware
00:59:28 of yourself in order to be conscious.
00:59:31 The self is just a particular content that you can have,
00:59:34 but you don’t have to have.
00:59:35 But you can be conscious in, for instance, a dream at night
00:59:39 or during a meditation state where you don’t have a self.
00:59:42 Right.
00:59:43 Where you’re just aware of the fact that you are aware.
00:59:45 And what we mean by consciousness in the colloquial sense
00:59:49 is largely this reflexive self awareness,
00:59:53 that we become aware of the fact
00:59:55 that we’re paying attention,
00:59:57 that we are the thing that pays attention.
00:59:59 We are the thing that pays attention, right.
01:00:02 I don’t see where the awareness that we’re aware,
01:00:07 the hard problem doesn’t feel like it’s solved.
01:00:10 I mean, it’s called a hard problem for a reason,
01:00:14 because it seems like there needs to be a major leap.
01:00:19 Yeah, I think the major leap is to understand
01:00:21 how it is possible that a machine can dream,
01:00:25 that a physical system is able to create a representation
01:00:29 that the physical system is acting on,
01:00:31 and that is spun forth and so on.
01:00:33 But once you accept the fact that you are not in physics,
01:00:36 but that you exist inside of the story,
01:00:39 I think the mystery disappears.
01:00:40 Everything is possible in the story.
01:00:41 You exist inside the story.
01:00:43 Okay, so the machine.
01:00:44 Your consciousness is being written into the story.
01:00:45 The fact that you experience things
01:00:47 is written inside of the story.
01:00:48 You ask yourself, is this real what I’m seeing?
01:00:51 And your brain writes into the story, yes, it’s real.
01:00:53 So what about the perception of consciousness?
01:00:56 So to me, you look conscious.
01:00:59 So the illusion of consciousness,
01:01:02 the demonstration of consciousness.
01:01:04 I’m asking about the legged robot.
01:01:07 How do we make this legged robot conscious?
01:01:10 So there’s two things,
01:01:12 and maybe you can tell me if they’re neighboring ideas.
01:01:16 One is actually make it conscious,
01:01:18 and the other is make it appear conscious to others.
01:01:22 Are those related?
01:01:25 Let’s ask it from the other direction.
01:01:27 What would it take to make you not conscious?
01:01:31 So when you are thinking about how you perceive the world,
01:01:35 can you decide to switch from looking at qualia
01:01:39 to looking at representational states?
01:01:43 And it turns out you can.
01:01:44 There is a particular way in which you can look at the world
01:01:48 and recognize its machine nature, including your own.
01:01:51 And in that state,
01:01:52 you don’t have that conscious experience
01:01:54 in this way anymore.
01:01:55 It becomes apparent as a representation.
01:01:59 Everything becomes opaque.
01:02:01 And I think this thing that you recognize,
01:02:04 everything is a representation.
01:02:05 This is typically what we mean with enlightenment states.
01:02:09 And it can happen on the motivational level,
01:02:11 but you can also do this on the experiential level,
01:02:14 on the perceptual level.
01:02:16 See, but then I can come back to a conscious state.
01:02:20 Okay, I particularly,
01:02:23 I’m referring to the social aspect
01:02:26 that the demonstration of consciousness
01:02:30 is a really nice thing at a party
01:02:32 when you’re trying to meet a new person.
01:02:34 It’s a nice thing to know that they’re conscious
01:02:38 and they can,
01:02:41 I don’t know how fundamental consciousness
01:02:42 is in human interaction,
01:02:43 but it seems like to be at least an important part.
01:02:48 And I ask that in the same kind of way for robots.
01:02:53 In order to create a rich, compelling
01:02:56 human robot interaction,
01:02:58 it feels like there needs to be elements of consciousness
01:03:00 within that interaction.
01:03:02 My cat is obviously conscious.
01:03:04 And so my cat can do this party trick.
01:03:07 She also knows that I am conscious,
01:03:09 and is able to have feedback about the fact
01:03:11 that we are both acting on models of our own awareness.
01:03:14 The question is how hard is it for the robot,
01:03:19 an artificially created robot, to achieve cat level
01:03:22 at party tricks?
01:03:24 Yes, so the issue for me is currently not so much
01:03:27 on how to build a system that creates a story
01:03:30 about a robot that lives in the world,
01:03:32 but to make an adequate representation of the world.
01:03:36 And the model that you and me have is a unified one.
01:03:40 It’s one where you basically make sense of everything
01:03:44 that you can perceive.
01:03:44 Every feature in the world that enters your perception
01:03:47 can be relationally mapped to a unified model of everything.
01:03:51 And we don’t have an AI that is able to construct
01:03:54 such a unified model yet.
01:03:56 So you need that unified model to do the party trick?
01:03:58 Yes, I think that it doesn’t make sense
01:04:01 if this thing is conscious,
01:04:03 but not in the same universe as you,
01:04:04 because you could not relate to each other.
01:04:06 So what’s the process, would you say,
01:04:08 of engineering consciousness in the machine?
01:04:12 Like what are the ideas here?
01:04:14 So you probably want to have some kind of perceptual system.
01:04:19 This perceptual system is a processing agent
01:04:21 that is able to track sensory data
01:04:23 and predict the next frame in the sensory data
01:04:26 from the previous frames of the sensory data
01:04:29 and the current state of the system.
01:04:31 So the current state of the system is, in perception,
01:04:34 instrumental to predicting what happens next.
01:04:37 And this means you build lots and lots of functions
01:04:39 that take all the blips that you feel on your skin
01:04:42 and that you see on your retina, or that you hear,
01:04:45 and put them into a set of relationships
01:04:48 that allows you to predict what kind of sensory data,
01:04:51 what kind of vector of blips,
01:04:53 you’re going to perceive in the next frame.
01:04:56 This is tuned and it’s constantly tuned
01:04:59 until it gets as accurate as it can.
01:05:01 You build a very accurate prediction mechanism
01:05:05 that is step one of the perception.
01:05:08 So first you predict, then you perceive
01:05:09 and see the error in your prediction.
01:05:11 And you have to do two things to make that happen.
01:05:13 One is you have to build a network of relationships
01:05:16 that are constraints,
01:05:18 that take all the variance in the world
01:05:21 and put each part of that variance into a variable
01:05:24 that is connected with relationships to other variables.
01:05:27 And these relationships are computable functions
01:05:30 that constrain each other.
01:05:31 So when you see a nose
01:05:32 that points in a certain direction in space,
01:05:34 you have a constraint that says
01:05:36 there should be a face nearby that has the same direction.
01:05:39 And if that is not the case,
01:05:40 you have some kind of contradiction
01:05:41 that you need to resolve
01:05:42 because it’s probably not a nose that you’re looking at.
01:05:44 It just looks like one.
01:05:45 So you have to reinterpret the data
01:05:48 until you get to a point where your model converges.
01:05:52 And this process of making the sensory data
01:05:54 fit into your model structure
01:05:56 is what Piaget calls the assimilation.
01:06:01 And accommodation is the change of the models
01:06:04 where you change your model in such a way
01:06:05 that you can assimilate everything.
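A minimal sketch of the predict-then-perceive loop, assuming a linear next-frame predictor and made-up hidden dynamics. Small prediction errors are assimilated by nudging the model; a persistently large error triggers a crude stand-in for accommodation, here just rebuilding the model. The learning rate, threshold, and dynamics are arbitrary toy choices.

import numpy as np

rng = np.random.default_rng(0)
dim = 4
W = np.zeros((dim, dim))           # current model: next_frame is roughly W @ frame
lr = 0.05
error_trace = 0.0                  # running average of prediction error

def true_world(x, t):
    # hidden dynamics the perceiver does not know; they change at t = 300
    A = np.eye(dim) * (0.9 if t < 300 else -0.9)
    return A @ x + 0.1 * rng.normal(size=dim)

x = rng.normal(size=dim)
for t in range(600):
    prediction = W @ x                       # step one: predict
    x_next = true_world(x, t)                # step two: perceive
    error = x_next - prediction              # step three: error in the prediction
    W += lr * np.outer(error, x)             # assimilation: small model update
    error_trace = 0.99 * error_trace + 0.01 * np.linalg.norm(error)
    if error_trace > 0.6:                    # arbitrary toy threshold
        W = np.zeros((dim, dim))             # crude "accommodation": rebuild the model
        error_trace = 0.0
    x = x_next
print("final prediction error estimate:", round(float(error_trace), 3))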
01:06:08 So you’re talking about building
01:06:09 a hell of an awesome perception system
01:06:12 that’s able to do prediction and perception
01:06:14 and correct and keep improving.
01:06:15 No, wait, that’s…
01:06:17 Wait, there’s more.
01:06:18 Yes, there’s more.
01:06:19 So the first thing that we wanted to do
01:06:21 is we want to minimize the contradictions in the model.
01:06:24 And of course, it’s very easy to make a model
01:06:26 in which you minimize the contradictions
01:06:28 just by allowing that it can be
01:06:29 in many, many possible states, right?
01:06:31 So if you increase degrees of freedom,
01:06:33 you will have fewer contradictions.
01:06:35 But you also want to reduce the degrees of freedom
01:06:37 because degrees of freedom mean uncertainty.
01:06:40 You want your model to reduce uncertainty
01:06:42 as much as possible,
01:06:44 but reducing uncertainty is expensive.
01:06:46 So you have to have a trade off
01:06:47 between minimizing contradictions
01:06:50 and reducing uncertainty.
01:06:52 And you have only a finite amount of compute
01:06:54 and experimental time and effort
01:06:57 available to reduce uncertainty in the world.
01:06:59 So you need to assign value to what you observe.
01:07:02 So you need some kind of motivational system
01:07:05 that is estimating what you should be looking at
01:07:07 and what you should be thinking about it,
01:07:09 how you should be applying your resources
01:07:10 to model what that is, right?
01:07:12 So you need to have something like convergence links
01:07:15 that tell you how to get from the present state
01:07:17 of the model to the next one.
01:07:19 You need to have these compatibility links
01:07:20 that tell you which constraints exist
01:07:23 and which constraint violations exist.
01:07:25 And you need to have some kind of motivational system
01:07:28 that tells you what to pay attention to.
01:07:30 So now we have a second agent next to the perceptual agent.
01:07:32 We have a motivational agent.
01:07:34 This is a cybernetic system
01:07:36 that is modeling what the system needs,
01:07:38 what’s important for the system,
01:07:40 and that interacts with the perceptual system
01:07:42 to maximize the expected reward.
01:07:44 And you’re saying the motivational system
01:07:46 is some kind of like, what is it?
01:07:49 A high level narrative over some lower level.
01:07:52 No, it’s just your brainstem stuff,
01:07:53 the limbic system stuff that tells you,
01:07:55 okay, now you should get something to eat
01:07:57 because I’ve just measured your blood sugar.
01:07:59 So you mean like motivational system,
01:08:00 like the lower level stuff, like hungry.
01:08:03 Yes, there’s basically physiological needs
01:08:05 and some cognitive needs and some social needs
01:08:07 and they all interact.
01:08:08 And they’re all implemented at different parts
01:08:10 in your nervous system as the motivational system.
01:08:12 But they’re basically cybernetic feedback loops.
01:08:14 It’s not that complicated.
01:08:16 It’s just a lot of code.
01:08:18 And so you now have a motivational agent
01:08:21 that makes your robot go for the ball
01:08:23 or that makes your worm go to eat food and so on.
01:08:27 And you have the perceptual system
01:08:29 that lets it predict the environment
01:08:30 so it’s able to solve that control problem to some degree.
01:08:33 And now what we learned is that it’s very hard
01:08:35 to build a machine learning system
01:08:37 that looks at all the data simultaneously
01:08:39 to see what kind of relationships
01:08:41 could exist between them.
01:08:43 So you need to selectively model the world.
01:08:45 You need to figure out where can I make the biggest difference
01:08:48 if I would put the following things together.
01:08:50 Sometimes you find a gradient for that.
01:08:53 When you have a gradient,
01:08:54 you don’t need to remember where you came from.
01:08:56 You just follow the gradient
01:08:57 until it doesn’t get any better.
01:08:59 But if you have a world where the problems are discontinuous
01:09:02 and the search spaces are discontinuous,
01:09:04 you need to retain memory of what you explored.
01:09:07 You need to construct a plan of what to explore next.
01:09:10 And this thing means that you have next
01:09:13 to this perceptual construction system
01:09:15 and the motivational cybernetics,
01:09:17 an agent that is paying attention
01:09:20 to what it should select at any given moment
01:09:22 to maximize reward.
01:09:24 And this scanning system, this attention agent,
01:09:27 is required for consciousness
01:09:28 and consciousness is its control model.
01:09:32 So it’s the index memories that this thing retains
01:09:36 when it manipulates the perceptual representations
01:09:39 to maximize the value and minimize the conflicts
01:09:43 and to increase coherence.
01:09:44 So the purpose of consciousness is to create coherence
01:09:47 in your perceptual representations,
01:09:49 remove conflicts, predict the future,
01:09:52 construct counterfactual representations
01:09:54 so you can coordinate your actions and so on.
01:09:57 And in order to do this, it needs to form memories.
01:10:00 These memories are partial binding states
01:10:02 of the working memory contents
01:10:04 that are being revisited later on to backtrack,
01:10:07 to undo certain states, to look for alternatives.
01:10:10 And these index memories that you can recall,
01:10:13 that is what you perceive as your stream of consciousness.
01:10:15 And being able to recall these memories,
01:10:17 this is what makes you conscious.
01:10:19 If you could not remember what you paid attention to,
01:10:21 you wouldn’t be conscious.
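A small sketch of why discontinuous search needs memories of what was attended to: rendered here as ordinary backtracking over a toy constraint problem, where the recorded trace plays the role of the index memories. The puzzle and names are arbitrary; this is not a claim about how brains store such memories.

def solve(assignment, variables, domains, conflicts, trace):
    """Assign variables one at a time; record each attention step in `trace`
    so failed branches can be undone (backtracking)."""
    if len(assignment) == len(variables):
        return assignment
    var = variables[len(assignment)]
    for value in domains[var]:
        trace.append((var, value))           # the "index memory" of this step
        candidate = dict(assignment, **{var: value})
        if not conflicts(candidate):
            result = solve(candidate, variables, domains, conflicts, trace)
            if result is not None:
                return result
        trace.append(("undo", var))          # revisit the memory and backtrack
    return None

# toy constraint problem: three adjacent regions, neighbors get different colors
variables = ["a", "b", "c"]
domains = {v: ["red", "green"] for v in variables}
neighbors = [("a", "b"), ("b", "c")]

def conflicts(assignment):
    return any(x in assignment and y in assignment and assignment[x] == assignment[y]
               for x, y in neighbors)

trace = []
solution = solve({}, variables, domains, conflicts, trace)
print("solution:", solution)
print("trace of attention steps:", trace)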
01:10:26 So consciousness is the index in the memory database.
01:10:29 Okay.
01:10:31 But let me sneak up to the questions of consciousness
01:10:35 a little further.
01:10:37 So we usually relate suffering to consciousness.
01:10:42 So the capacity to suffer.
01:10:46 I think to me, that’s a really strong sign of consciousness
01:10:49 is a thing that can suffer.
01:10:52 How is that useful?
01:10:55 Suffering.
01:10:57 And like in your model where you just described,
01:10:59 which is indexing of memories and what is the coherence
01:11:03 with the perception, with this predictive thing
01:11:07 that’s going on in the perception,
01:11:09 how does suffering relate to any of that?
01:11:13 The higher level suffering that humans do.
01:11:16 Basically pain is a reinforcement signal.
01:11:20 Pain is a signal that one part of your brain
01:11:23 sends to another part of your brain,
01:11:25 or in an abstract sense, part of your mind
01:11:27 sends to another part of the mind to regulate its behavior,
01:11:30 to tell it the behavior that you’re currently exhibiting
01:11:33 should be improved.
01:11:34 And this is the signal that I tell you to move away
01:11:39 from what you’re currently doing
01:11:40 and push into a different direction.
01:11:42 So pain gives a part of you an impulse
01:11:46 to do something differently.
01:11:47 But sometimes this doesn’t work
01:11:49 because the training part of your brain
01:11:52 is talking to the wrong region,
01:11:54 or because it has the wrong model
01:11:55 of the relationships in the world.
01:11:57 Maybe you’re mismodeling yourself
01:11:58 or you’re mismodeling the relationship of yourself
01:12:00 to the world,
01:12:01 or you’re mismodeling the dynamics of the world.
01:12:03 So you’re trying to improve something
01:12:04 that cannot be improved by generating more pain.
01:12:07 But the system doesn’t have any alternative.
01:12:10 So it doesn’t get better.
01:12:12 What do you do if something doesn’t get better
01:12:14 and you want it to get better?
01:12:15 You increase the strengths of the signal.
01:12:17 And then the signal becomes chronic
01:12:19 when it becomes permanent without a change inside.
01:12:22 This is what we call suffering.
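A toy sketch of the escalation just described: a regulator sends an error signal to another part, and when that part cannot actually improve anything, the sender's only move is to raise the gain until the signal becomes chronic. All numbers are illustrative.

def regulate(can_actually_improve, steps=20):
    discomfort = 1.0      # mismatch between set point and current state
    signal = 1.0          # strength of the "pain" signal being sent
    for _ in range(steps):
        if can_actually_improve:
            discomfort = max(0.0, discomfort - 0.2 * signal)
        if discomfort > 0:
            signal *= 1.3          # nothing improved, so escalate
        else:
            signal = 0.0
            break
    return discomfort, signal

print("treatable issue    ->", regulate(True))
print("unresolvable issue ->", regulate(False))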
01:12:24 And the purpose of consciousness
01:12:26 is to deal with contradictions,
01:12:28 with things that cannot be resolved.
01:12:30 The purpose of consciousness,
01:12:31 I think is similar to a conductor in an orchestra.
01:12:35 When everything works well,
01:12:36 the orchestra doesn’t need much of a conductor
01:12:38 as long as it’s coherent.
01:12:40 But when there is a lack of coherence
01:12:42 or something is consistently producing
01:12:44 disharmony and mismatches,
01:12:46 then the conductor becomes alert and interacts with it.
01:12:48 So suffering attracts the activity of our consciousness.
01:12:52 And the purpose of that is ideally
01:12:54 that we bring new layers online,
01:12:56 new layers of modeling that are able to create
01:13:00 a model of the dysregulation so we can deal with it.
01:13:04 And this means that we typically get
01:13:06 higher level consciousness, so to speak, right?
01:13:08 We get some consciousness above our pay grade maybe
01:13:11 if we have some suffering early in our life.
01:13:13 Most of the interesting people
01:13:14 had trauma early on in their childhood.
01:13:17 And trauma means that you are suffering an injury
01:13:20 for which the system is not prepared,
01:13:23 which it cannot deal with,
01:13:24 which it cannot insulate itself from.
01:13:26 So something breaks.
01:13:27 And this means that the behavior of the system
01:13:29 is permanently disturbed in a way
01:13:34 that some mismatch exists now in the regulation
01:13:37 that just by following your impulses,
01:13:39 by following the pain in the direction where it hurts,
01:13:41 the situation doesn’t improve but gets worse.
01:13:44 And so what needs to happen is that you grow up.
01:13:47 And the part that has grown up
01:13:49 is able to deal with the part
01:13:51 that is stuck in this earlier phase.
01:13:53 Yeah, so at least you grow,
01:13:54 you’re adding extra layers to your cognition.
01:13:58 And let me ask you then,
01:14:00 because I gotta stick on suffering,
01:14:02 the ethics of the whole thing.
01:14:05 So not our consciousness, but the consciousness of others.
01:14:08 You’ve tweeted, one of my biggest fears
01:14:13 is that insects could be conscious.
01:14:16 The amount of suffering on earth would be unthinkable.
01:14:20 So when we think of other conscious beings,
01:14:24 is suffering a property of consciousness
01:14:30 that we’re most concerned about?
01:14:32 So I’m still thinking about robots,
01:14:40 how to make sense of other nonhuman things
01:14:44 that appear to have the depth of experience
01:14:48 that humans have.
01:14:50 And to me, that means consciousness
01:14:54 and the darkest side of that, which is suffering,
01:14:57 the capacity to suffer.
01:15:00 And so I started thinking,
01:15:02 how much responsibility do we have
01:15:04 for those other conscious beings?
01:15:06 That’s where the definition of consciousness
01:15:10 becomes most urgent.
01:15:13 Like having to come up with a definition of consciousness
01:15:15 becomes most urgent,
01:15:16 is who should we and should we not be torturing?
01:15:24 There’s no general answer to this.
01:15:26 Was Genghis Khan doing anything wrong?
01:15:29 It depends right on how you look at it.
01:15:31 Well, he drew a line somewhere
01:15:36 where this is us and that’s them.
01:15:38 It’s the circle of empathy.
01:15:40 It’s like these,
01:15:42 you don’t have to use the word consciousness,
01:15:44 but these are the things that matter to me
01:15:48 if they suffer or not.
01:15:50 And these are the things that don’t matter to him.
01:15:52 Yeah, but when one of his commanders failed him,
01:15:54 he broke his spine and let him die in a horrible way.
01:15:59 And so in some sense,
01:16:01 I think he was indifferent to suffering
01:16:03 or he was indifferent in the sense
01:16:05 that he didn’t see it as useful if he inflicted suffering,
01:16:10 but he did not see it as something that had to be avoided.
01:16:14 That was not the goal.
01:16:15 The question was, how can I use suffering
01:16:18 and the infliction of suffering to reach my goals
01:16:21 from his perspective?
01:16:23 I see.
01:16:24 So like different societies throughout history
01:16:26 put different value on the…
01:16:29 Different individuals, different psyches.
01:16:31 But also even the objective of avoiding suffering,
01:16:35 like some societies probably,
01:16:37 I mean, this is where religious belief really helps,
01:16:40 the afterlife, that it doesn’t matter
01:16:44 that you suffer or die,
01:16:45 what matters is you suffer honorably, right?
01:16:49 So that you enter the afterlife as a hero.
01:16:52 That seems superstitious to me.
01:16:53 Basically, beliefs that assert things
01:16:57 for which no evidence exists
01:17:00 are incompatible with sound epistemology.
01:17:02 And I don’t think that religion has to be superstitious,
01:17:04 otherwise it would have to be condemned in all cases.
01:17:06 You’re somebody who’s saying we live in a dream world,
01:17:09 we have zero evidence for anything.
01:17:11 So…
01:17:12 That’s not the case.
01:17:13 There are limits to what languages can be constructed.
01:17:16 Mathematics brings solid evidence for its own structure.
01:17:19 And once we have some idea of what languages exist,
01:17:23 and how a system can learn,
01:17:24 and what learning itself is in the first place,
01:17:26 we can begin to realize that our intuition
01:17:31 that we are able to learn about the regularities
01:17:34 of the world and minimize surprise
01:17:36 and understand the nature of our own agency
01:17:38 to some degree of abstraction,
01:17:40 that’s not an illusion.
01:17:42 So it’s a useful approximation.
01:17:44 Just because we live in a dream world
01:17:46 doesn’t mean mathematics can’t give us a consistent glimpse
01:17:51 of physical, of objective reality.
01:17:54 We can basically distinguish useful encodings
01:17:57 from useless encodings.
01:17:58 And when we apply our truth seeking to the world,
01:18:03 we know we usually cannot find out
01:18:05 whether a certain thing is true.
01:18:07 What we typically do is we take the state vector
01:18:10 of the universe separated into separate objects
01:18:12 that interact with each other through interfaces.
01:18:14 And this distinction that we are making
01:18:16 is not completely arbitrary.
01:18:17 It’s done to optimize the compression
01:18:21 that we can apply to our models of the universe.
01:18:23 So we can predict what’s happening
01:18:25 with our limited resources.
01:18:27 In this sense, it’s not arbitrary.
01:18:29 But the separation of the world into objects
01:18:32 that are somehow discrete and interacting with each other
01:18:34 is not the true reality, right?
01:18:36 The boundaries between the objects
01:18:38 are projected into the world, not arbitrarily projected.
01:18:41 But still, it’s only an approximation
01:18:44 of what’s actually the case.
01:18:46 And we sometimes notice that we run into contradictions
01:18:48 when we try to understand high level things
01:18:50 like economic aspects of the world
01:18:53 and so on, or political aspects, or psychological aspects
01:18:56 where we make simplifications.
01:18:58 And the objects that we are using to separate the world
01:19:00 are just one of many possible projections
01:19:03 of what’s going on.
01:19:04 So it’s not, in this postmodernist sense,
01:19:07 completely arbitrary, and you’re free to pick
01:19:09 what you want or dismiss what you don’t like
01:19:11 because it’s all stories.
01:19:12 No, that’s not true.
01:19:13 You have to show for every model
01:19:15 how well it predicts the world.
01:19:17 So the confidence that you should have
01:19:19 in the entities of your models
01:19:21 should correspond to the evidence that you have.
01:19:24 Can I ask you on a small tangent
01:19:27 to talk about your favorite set of ideas and people,
01:19:32 which is postmodernism.
01:19:35 What?
01:19:37 What is postmodernism?
01:19:39 How would you define it?
01:19:40 And why to you is it not a useful framework of thought?
01:19:48 Postmodernism is something that I’m really not an expert on.
01:19:52 And postmodernism is a set of philosophical ideas
01:19:57 that is difficult to lump together,
01:19:58 that is characterized by some useful thinkers,
01:20:01 some of them poststructuralists and so on.
01:20:04 And I’m mostly not interested in it
01:20:05 because I think that it’s not leading me anywhere
01:20:08 that I find particularly useful.
01:20:11 It’s mostly, I think, born out of the insight
01:20:13 that the ontologies that we impose on the world
01:20:17 are not literally true.
01:20:18 And that we can often get to a different interpretation
01:20:20 of the world by using a different ontology,
01:20:22 that is, a different separation of the world
01:20:25 into interacting objects.
01:20:26 But the idea that this makes the world a set of stories
01:20:30 that are arbitrary, I think, is wrong.
01:20:33 And the people that are engaging in this type of philosophy
01:20:37 are working in an area that I largely don’t find productive.
01:20:40 There’s nothing useful coming out of this.
01:20:43 So this idea that truth is relative
01:20:45 is not something that has, in some sense,
01:20:46 informed physics or theory of relativity.
01:20:49 And there is no feedback between those.
01:20:51 There is no meaningful influence
01:20:54 of this type of philosophy on the sciences
01:20:56 or on engineering or on politics.
01:20:59 But there is a very strong influence on ideology
01:21:04 because it basically has become an ideology
01:21:07 that is justifying itself by the notion
01:21:11 that truth is a relative concept.
01:21:13 And it’s not being used in such a way
01:21:15 that the philosophers or sociologists
01:21:18 that take up these ideas say,
01:21:20 oh, I should doubt my own ideas because maybe my separation of the world
01:21:24 into objects is not completely valid.
01:21:25 And I should maybe use a different one
01:21:27 and be open to a pluralism of ideas.
01:21:30 But it mostly exists to dismiss the ideas of other people.
01:21:34 It becomes, yeah, it becomes a political weapon of sorts
01:21:37 to achieve power.
01:21:39 Basically, there’s nothing wrong, I think,
01:21:42 with developing a philosophy around this.
01:21:46 But to develop a philosophy around this,
01:21:49 to develop norms around the idea
01:21:51 that truth is something that is completely negotiable,
01:21:54 is incompatible with the scientific project.
01:21:57 And I think if the academia has no defense
01:22:02 against the ideological parts of the postmodernist movement,
01:22:06 it’s doomed.
01:22:07 Right, you have to acknowledge the ideological part
01:22:11 of any movement, actually, including postmodernism.
01:22:15 Well, the question is what an ideology is.
01:22:17 And to me, an ideology is basically a viral memeplex
01:22:21 that is changing your mind in such a way that reality gets warped.
01:22:25 It gets warped in such a way that you’re being cut off
01:22:28 from the rest of human thought space.
01:22:29 And you cannot consider things outside of the range of ideas
01:22:33 of your own ideology as possibly true.
01:22:35 Right, so, I mean, there’s certain properties to an ideology
01:22:38 that make it harmful.
01:22:39 One of them is that dogmatism of just certainty,
01:22:44 dogged certainty in that you’re right,
01:22:46 you have the truth, and nobody else does.
01:22:48 Yeah, but what is creating the certainty?
01:22:50 It’s very interesting to look at the type of model
01:22:53 that is being produced.
01:22:54 Is it basically just a strong prior, and you tell people,
01:22:57 oh, this idea that you consider to be very true,
01:22:59 the evidence for this is actually just much weaker
01:23:02 than you thought, and look, here are some studies.
01:23:04 No, this is not how it works.
01:23:06 It’s usually normative, which means some thoughts
01:23:09 are unthinkable because they would change your identity
01:23:13 into something that is no longer acceptable.
01:23:17 And this cuts you off from considering an alternative.
01:23:20 And many de facto religions use this trick
01:23:23 to lock people into a certain mode of thought,
01:23:25 and this removes agency over your own thoughts.
01:23:27 And it’s very ugly to me.
01:23:28 It’s basically not just a process of domestication,
01:23:32 but it’s actually an intellectual castration
01:23:35 that happens.
01:23:36 It’s an inability to think creatively
01:23:39 and to bring forth new thoughts.
01:23:40 I can ask you about substances, chemical substances
01:23:48 that affect the video game, the dream world.
01:23:53 So psychedelics that increasingly have been getting
01:23:57 a lot of research done on them.
01:23:58 So in general, psychedelics, psilocybin, MDMA,
01:24:02 but also a really interesting one, the big one, which is DMT.
01:24:06 What and where are the places that these substances
01:24:10 take the mind that is operating in the dream world?
01:24:16 Do you have an interesting sense how this throws a wrinkle
01:24:20 into the prediction model?
01:24:22 Is it just some weird little quirk
01:24:24 or is there some fundamental expansion
01:24:27 of the mind going on?
01:24:31 I suspect that a way to look at psychedelics
01:24:34 is that they induce particular types
01:24:36 of lucid dreaming states.
01:24:38 So it’s a state in which certain connections
01:24:41 are being severed in your mind.
01:24:43 They’re no longer active.
01:24:45 Your mind basically gets free to move in a certain direction
01:24:48 because some inhibition, some particular inhibition
01:24:51 doesn’t work anymore.
01:24:52 And as a result, you might stop having a self
01:24:55 or you might stop perceiving the world as three dimensional.
01:25:00 And you can explore that state.
01:25:04 And I suppose that for every state
01:25:06 that can be induced with psychedelics,
01:25:08 there are people that are naturally in that state.
01:25:10 So sometimes psychedelics shift you
01:25:13 through a range of possible mental states.
01:25:15 And they can also shift you out of the range
01:25:17 of permissible mental states
01:25:19 that is where you can make predictive models of reality.
01:25:22 And what I observe in people that use psychedelics a lot
01:25:26 is that they tend to be overfitting.
01:25:29 Overfitting means that you are using more bits
01:25:34 for modeling the dynamics of a function than you should.
01:25:38 And so you can fit your curve
01:25:40 to extremely detailed things in the past,
01:25:42 but this model is no longer predictive for the future.
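A standard toy illustration of overfitting in exactly this sense: a model with too many parameters fits the noisy past almost perfectly but predicts held-out points worse than a simpler one. The data, noise level, and polynomial degrees are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 12)
x_test = np.linspace(0, 1, 200)
true_fn = lambda x: np.sin(2 * np.pi * x)
y_train = true_fn(x_train) + 0.15 * rng.normal(size=x_train.size)

for degree in (3, 11):
    coeffs = np.polyfit(x_train, y_train, degree)                     # fit the "past"
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - true_fn(x_test)) ** 2)
    print(f"degree {degree:2d}: train error {train_err:.4f}, test error {test_err:.4f}")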
01:25:45 What is it about psychedelics that forces that?
01:25:49 I thought it would be the opposite.
01:25:51 I thought that it’s a good mechanism
01:25:54 for generalization, for regularization.
01:25:59 So it feels like psychedelics expansion of the mind,
01:26:03 like taking you outside of,
01:26:04 like forcing your model to be non predictive
01:26:08 is a good thing.
01:26:11 Meaning like, it’s almost like, okay,
01:26:14 what I would say psychedelics are akin to
01:26:16 is traveling to a totally different environment.
01:26:19 Like going, if you’ve never been to like India
01:26:21 or something like that from the United States,
01:26:24 very different set of people, different culture,
01:26:26 different food, different roads and values
01:26:30 and all those kinds of things.
01:26:31 Yeah, so psychedelics can, for instance,
01:26:33 teleport people into a universe that is hyperbolic,
01:26:37 which means that if you imagine a room that you’re in,
01:26:41 you can turn around 360 degrees
01:26:43 and you didn’t go full circle.
01:26:44 You need to go 720 degrees to go full circle.
01:26:47 Exactly.
01:26:48 So the things that people learn in that state
01:26:50 cannot be easily transferred
01:26:52 in this universe that we are in.
01:26:54 It could be that if they’re able to abstract
01:26:56 and understand what happened to them,
01:26:58 that they understand that some part
01:27:00 of their spatial cognition has been desynchronized
01:27:03 and has found a different synchronization.
01:27:05 And this different synchronization
01:27:06 happens to be a hyperbolic one, right?
01:27:08 So you learn something interesting about your brain.
01:27:10 It’s difficult to understand what exactly happened,
01:27:13 but we get a pretty good idea
01:27:14 once we understand how the brain is representing geometry.
01:27:17 Yeah, but doesn’t it give you a fresh perspective
01:27:20 on the physical reality?
01:27:26 Who’s making that sound?
01:27:27 Is it inside my head or is it external?
01:27:30 Well, there is no sound outside of your mind,
01:27:33 but it’s making sense of a phenomenon in physics.
01:27:39 Yeah, in the physical reality, there’s sound waves
01:27:44 traveling through air.
01:27:45 Okay.
01:27:47 That’s our model of what happened.
01:27:48 That’s our model of what happened, right.
01:27:53 Don’t psychedelics give you a fresh perspective
01:27:57 on this physical reality?
01:27:59 Like, not this physical reality, but this more…
01:28:05 What do you call the dream world that’s mapped directly to…
01:28:09 The purpose of dreaming at night, I think,
01:28:11 is data augmentation.
01:28:13 Exactly.
01:28:14 So that’s very different.
01:28:16 That’s very similar to psychedelics.
01:28:18 It changes parameters of the things that you have learned.
01:28:21 And, for instance, when you are young,
01:28:24 you have seen things from certain perspectives,
01:28:26 but not from others.
01:28:27 So your brain is generating new perspectives of objects
01:28:30 that you already know,
01:28:31 which means you can learn to recognize them later
01:28:34 from different perspectives.
01:28:35 And I suspect that’s the reason that many of us
01:28:37 remember having flying dreams as children,
01:28:39 because it’s just different perspectives of the world
01:28:41 that you already know,
01:28:43 and then it starts to generate these different
01:28:46 perspective changes,
01:28:47 and then it fluidly turns this into a flying dream
01:28:50 to make sense of what’s happening, right?
01:28:52 So you fill in the gaps,
01:28:53 and suddenly you see yourself flying.
01:28:55 And similar things can happen with semantic relationships.
01:28:58 So it’s not just spatial relationships,
01:29:00 but it can also be the relationships between ideas
01:29:03 that are being changed.
01:29:05 And it seems that the mechanisms that make that happen
01:29:08 during dreaming are interacting
01:29:12 with these same receptors
01:29:14 that are being stimulated by psychedelics.
01:29:17 So I suspect that there is a thing
01:29:22 that I haven’t really read about.
01:29:22 The way in which dreams are induced in the brain
01:29:24 is not just that the activity of the brain gets tuned down
01:29:28 because your eyes are closed
01:29:30 and you no longer get enough data from your eyes,
01:29:33 but there is a particular type of neurotransmitter
01:29:37 that is saturating your brain during these phases,
01:29:40 during the REM phases, and you produce
01:29:42 controlled hallucinations.
01:29:44 And psychedelics are linking into these mechanisms,
01:29:48 I suspect.
01:29:49 So isn’t that another trickier form of data augmentation?
01:29:54 Yes, but it’s also data augmentation
01:29:57 that can happen outside of the specification
01:29:59 that your brain is tuned to.
01:30:00 So basically people are overclocking their brains
01:30:03 and that produces states
01:30:05 that are subjectively extremely interesting.
01:30:09 Yeah, I just.
01:30:10 But from the outside, very suspicious.
01:30:12 So I think I’m over applying the metaphor
01:30:15 of a neural network in my own mind,
01:30:17 which I just think that doesn’t lead to overfitting, right?
01:30:22 But you were just sort of anecdotally saying
01:30:26 my experiences with people that have done psychedelics
01:30:28 are that kind of quality.
01:30:30 I think it typically happens.
01:30:31 So if you look at people like Timothy Leary,
01:31:34 he has written beautiful manifestos
01:30:36 about the effect of LSD on people.
01:30:40 He genuinely believed, he writes in these manifestos,
01:30:42 that in the future, science and art
01:30:44 will only be done on psychedelics
01:30:46 because it’s so much more efficient and so much better.
01:30:49 And he gave LSD to children in this community
01:30:52 of a few thousand people that he had near San Francisco.
01:30:55 And basically he was losing touch with reality.
01:31:00 He did not understand the effects
01:31:02 that the things that he was doing
01:31:04 would have on the reception of psychedelics
01:31:06 by society because he was unable to think critically
01:31:09 about what happened.
01:31:10 What happened was that he got in a euphoric state,
01:31:13 that euphoric state happened because he was overfitting.
01:31:16 He was taking this sense of euphoria
01:31:19 and translating it into a model
01:31:21 of actual success in the world, right?
01:31:23 He was feeling better.
01:32:25 Limitations that he had experienced to exist
01:32:26 had disappeared,
01:32:29 but he didn’t get superpowers.
01:31:30 I understand what you mean by overfitting now.
01:31:33 There’s a lot of interpretation to the term
01:31:36 overfitting in this case, but I got you.
01:31:38 So he was getting positive rewards
01:31:42 from a lot of actions that he shouldn’t have been doing.
01:31:44 Yeah, but not just this.
01:31:45 So if you take, for instance, John Lilly,
01:31:46 who was studying dolphin languages and aliens and so on,
01:31:52 a lot of people that use psychedelics became very loopy.
01:31:55 And the typical thing that you notice
01:31:58 when people are on psychedelics is that they are in a state
01:32:00 where they feel that everything can be explained now.
01:32:03 Everything is clear, everything is obvious.
01:32:06 And sometimes they have indeed discovered
01:32:09 a useful connection, but not always.
01:32:12 Very often these connections are overinterpretations.
01:32:15 I wonder, you know, there’s a question
01:32:17 of correlation versus causation.
01:32:21 And also I wonder if it’s the psychedelics
01:32:23 or if it’s more the social aspect, like being the outsider
01:32:28 and having a strong community of outsiders
01:32:31 and having a leadership position in an outsider,
01:32:34 cult-like community, that could have a much stronger effect
01:32:37 of overfitting than the psychedelics themselves,
01:32:39 the actual substances, because it’s a counterculture thing.
01:32:43 So it could be that as opposed to the actual substance.
01:32:46 If you’re a boring person who wears a suit and tie
01:32:49 and works at a bank and takes psychedelics,
01:32:53 that could be a very different effect
01:32:55 of psychedelics on your mind.
01:32:57 I’m just sort of raising the point
01:32:59 that the people you referenced are already weirdos.
01:33:02 I’m not sure exactly.
01:33:04 No, not necessarily.
01:33:05 A lot of the people that tell me
01:33:07 that they use psychedelics in a useful way
01:33:10 started out as squares and were liberating themselves
01:33:14 because they were stuck.
01:33:16 They were basically stuck in a local optimum
01:33:17 of their own self model, of their relationship to the world.
01:33:20 And suddenly they had data augmentation.
01:33:23 They basically saw and experienced a space of possibilities.
01:33:26 They experienced what it would be like to be another person.
01:33:29 And they took important lessons
01:33:32 from that experience back home.
01:33:36 Yeah, I mean, I love the metaphor of data augmentation
01:33:40 because that’s been the primary driver
01:33:44 of self supervised learning in the computer vision domain
01:33:48 is data augmentation.
01:33:50 So it’s funny to think of data augmentation,
01:33:53 like chemically induced data augmentation in the human mind.
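For reference, a minimal numpy sketch of what data augmentation means in machine learning: generate new views of an image you already have, through flips, crops, and brightness changes, so a model can learn to recognize the same thing from perspectives it never literally saw. The image and parameters here are stand-ins.

import numpy as np

rng = np.random.default_rng(42)

def augment(image):
    """Produce one randomly perturbed view of a grayscale image."""
    out = image
    if rng.random() < 0.5:                       # random horizontal flip
        out = out[:, ::-1]
    h, w = out.shape                             # random 24x24 crop
    top = rng.integers(0, h - 24 + 1)
    left = rng.integers(0, w - 24 + 1)
    out = out[top:top + 24, left:left + 24]
    out = out * rng.uniform(0.8, 1.2)            # random brightness change
    return np.clip(out, 0.0, 1.0)

image = rng.random((32, 32))                     # stand-in for a real photo
views = [augment(image) for _ in range(8)]       # eight "new perspectives"
print([v.shape for v in views])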
01:33:58 There’s also a very interesting effect that I noticed.
01:34:02 I know several people who swear to me
01:34:06 that LSD has cured their migraines.
01:34:09 So severe cluster headaches or migraines
01:34:13 that didn’t respond to standard medication
01:34:15 that disappeared after a single dose.
01:34:18 And I don’t recommend anybody doing this,
01:34:20 especially not in the US where it’s illegal.
01:34:23 And there are no studies on this for that reason.
01:34:26 But it seems that anecdotally
01:34:28 that it basically can reset the serotonergic system.
01:34:33 So it’s basically pushing them
01:34:36 outside of their normal boundaries.
01:34:38 And as a result, it needs to find a new equilibrium.
01:34:41 And in some people that equilibrium is better,
01:34:43 but it also follows that in other people it might be worse.
01:34:46 So if you have a brain that is already teetering
01:34:49 on the boundary to psychosis,
01:34:51 it can be permanently pushed over that boundary.
01:34:54 Well, that’s why you have to do good science,
01:34:56 which they’re starting to do on all these different
01:34:58 substances of how well it actually works
01:34:59 for the different conditions like MDMA seems to help
01:35:02 with PTSD, same with psilocybin.
01:35:05 You need to do good science,
01:35:08 meaning large studies of large N.
01:35:10 Yeah, so based on the existing studies of MDMA,
01:35:14 it seems that if you look at Rick Doblin’s work
01:35:17 and what he has published about this and talks about,
01:35:20 MDMA seems to be a psychologically relatively safe drug.
01:35:24 But it’s physiologically not very safe.
01:35:26 That is, there is neurotoxicity
01:35:30 if you would use a too large dose.
01:35:31 And if you combine this with alcohol,
01:35:34 which a lot of kids do in party settings during raves
01:35:37 and so on, it’s very hepatotoxic.
01:35:40 So basically you can kill your liver.
01:35:42 And this means that it’s probably something that is best
01:35:45 and most productively used in a clinical setting
01:35:48 by people who really know what they’re doing.
01:35:50 And I suspect that’s also true for the other psychedelics
01:35:53 that is while the other psychedelics are probably not
01:35:56 as toxic as say alcohol,
01:35:59 the effects on the psyche can be much more profound
01:36:02 and lasting.
01:36:03 Yeah, well, as far as I know psilocybin,
01:36:05 so mushrooms, magic mushrooms,
01:36:09 as far as I know in terms of the studies they’re running,
01:36:11 I think it has no toxicity, like they’re allowed to do
01:36:15 what they’re calling heroic doses.
01:36:17 So that one does not have a toxicity problem.
01:36:18 So they could do like huge doses in a clinical setting
01:36:21 when they’re doing study on psilocybin,
01:36:23 which is kind of fun.
01:36:25 Yeah, it seems that most of the psychedelics
01:36:27 work in extremely small doses,
01:36:29 which means that the effect on the rest of the body
01:36:32 is relatively low.
01:36:33 And MDMA is probably the exception.
01:36:36 Maybe ketamine can be dangerous in larger doses
01:36:38 because it can depress breathing and so on.
01:36:41 But the LSD and psilocybin work in very, very small doses,
01:36:45 at least the active part of them,
01:36:47 which for psilocybin and LSD is only a tiny amount.
01:36:50 But the effect that they can have
01:36:54 on your mental wiring can be very dangerous, I think.
01:36:57 Let’s talk about AI a little bit.
01:37:00 What are your thoughts about GPT-3 and language models
01:37:05 trained with self supervised learning?
01:37:09 It came out quite a bit ago,
01:37:11 but I wanted to get your thoughts on it.
01:37:13 Yeah.
01:37:14 In the nineties, I was in New Zealand
01:37:16 and I had an amazing professor, Ian Witten,
01:37:21 who realized I was bored in class and put me in his lab.
01:37:25 And he gave me the task to discover grammatical structure
01:37:28 in an unknown language.
01:37:31 And the unknown language that I picked was English
01:37:33 because it was the easiest one
01:37:35 to find a corpus for, or to construct one.
01:37:37 And he gave me the largest computer at the whole university.
01:37:41 It had two gigabytes of RAM, which was amazing.
01:37:44 And I wrote everything in C
01:37:45 with some in memory compression to do statistics
01:37:47 over the language.
01:37:49 And I first would create a dictionary of all the words,
01:37:53 which basically tokenizes everything and compresses things
01:37:57 so that I don’t need to store the whole word,
01:37:58 but just a code for every word.
01:38:02 And then I was taking this all apart in sentences
01:38:05 and I was trying to find all the relationships
01:38:09 between all the words in the sentences
01:38:10 and do statistics over them.
01:38:12 And that proved to be impossible
01:38:15 because the complexity is just too large.
01:38:18 So if you want to discover the relationship
01:38:20 between an article and a noun,
01:38:21 and there are three adjectives in between,
01:39:23 you cannot do n-gram statistics
01:38:25 and look at all the possibilities that can exist,
01:38:28 at least not with the resources that we had back then.
01:38:30 So I realized I need to make some statistics
01:38:33 over what I need to make statistics over.
01:38:35 So I wrote something that was pretty much a hack
01:38:38 that did this for at least first order relationships.
01:38:42 And I came up with some kind of mutual information graph
01:38:45 that was indeed discovering something that looks exactly
01:38:48 like the grammatical structure of the sentence,
01:38:50 just by trying to encode the sentence
01:38:52 in such a way that the words would be written
01:38:54 in the optimal order inside of the model.
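As a rough illustration of the kind of first order statistics being described, here is a minimal Python sketch, not the original C program: it tokenizes a tiny placeholder corpus, counts adjacent word pairs, and scores them by pointwise mutual information, the building block of a mutual information graph.

```python
# Minimal sketch of first-order co-occurrence statistics over a corpus,
# scored by pointwise mutual information (PMI). Illustrative only; the
# original work was a C program with in-memory compression.
import math
from collections import Counter

corpus = "the quick brown fox jumps over the lazy dog . the dog sleeps ."  # placeholder text
tokens = corpus.split()

word_counts = Counter(tokens)                   # unigram counts (the "dictionary")
pair_counts = Counter(zip(tokens, tokens[1:]))  # adjacent word pairs (first-order relations)
total_words = len(tokens)
total_pairs = len(tokens) - 1

def pmi(w1, w2):
    """log p(w1, w2) / (p(w1) * p(w2)) for adjacent words."""
    p_pair = pair_counts[(w1, w2)] / total_pairs
    p_w1 = word_counts[w1] / total_words
    p_w2 = word_counts[w2] / total_words
    return math.log(p_pair / (p_w1 * p_w2))

# Edges of a mutual-information graph: word pairs that co-occur more often
# than chance, which tends to recover local syntactic structure.
edges = sorted(((pmi(a, b), a, b) for (a, b) in pair_counts), reverse=True)
for score, a, b in edges[:5]:
    print(f"{a} -> {b}: {score:.2f}")
```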
01:38:58 And what I also found is that if we would be able
01:39:02 to increase the resolution of that
01:39:03 and not just use this model
01:39:06 to reproduce grammatically correct sentences,
01:39:09 we would also be able
01:39:09 to produce stylistically correct sentences
01:39:12 by just having more bits in these relationships.
01:39:14 And if we wanted to have meaning,
01:39:16 we would have to go much higher order.
01:39:18 And I didn’t know how to make higher order models back then
01:39:21 without spending way more years in research
01:39:23 on how to make the statistics
01:39:25 over what we need to make statistics over.
01:39:28 And this thing that we cannot look at the relationships
01:39:31 between all the bits in your input is being solved
01:39:34 in different domains in different ways.
01:39:35 So in computer graphics and computer vision,
01:39:39 the standard method for many years now
01:39:41 has been convolutional neural networks.
01:39:43 Convolutional neural networks are hierarchies of filters
01:39:46 that exploit the fact that neighboring pixels
01:39:48 in images are usually semantically related
01:39:51 and distant pixels in images
01:39:53 are usually not semantically related.
01:39:55 So just by grouping the pixels
01:39:57 that are next to each other
01:39:59 hierarchically together, you can reconstruct the shape of objects.
01:40:02 And this is an important prior
01:40:04 that we built into these models
01:40:06 so they can converge quickly.
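To make that locality prior concrete, here is a tiny sketch of a single 2D convolution written directly with NumPy: each output value depends only on a small neighborhood of adjacent pixels, which is exactly the built-in assumption that nearby pixels are semantically related. The image and kernel are placeholders.

```python
# Tiny sketch of the locality prior in a convolutional layer: each output
# pixel is computed only from a small window of neighboring input pixels.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Only the (kh x kw) neighborhood around (i, j) matters.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)                   # placeholder "image"
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [2.0, 0.0, -2.0],
                        [1.0, 0.0, -1.0]])     # a simple edge-detecting filter
print(conv2d(image, edge_kernel).shape)        # (6, 6)
```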
01:40:08 But this doesn’t work in language
01:40:09 for the reason that adjacent words are often
01:40:12 but not always related and distant words
01:40:14 are sometimes related while the words in between are not.
01:40:19 So how can you learn the topology of language?
01:40:22 And I think for this reason that this difficulty existed,
01:40:26 the transformer was invented
01:40:28 in natural language processing, not in vision.
01:40:32 And what the transformer is doing,
01:40:34 it’s a hierarchy of layers where every layer learns
01:40:38 what to pay attention to in the given context
01:40:40 in the previous layer.
01:40:43 So what to make the statistics over.
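A minimal sketch of what "learning what to pay attention to" looks like in code: scaled dot-product self-attention, where each token computes a weighting over every other token in the context rather than only over its neighbors. All sizes and weights here are random placeholders.

```python
# Minimal self-attention sketch: every token produces a weighting over all
# other tokens in the context, instead of only looking at adjacent tokens.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    q, k, v = x @ wq, x @ wk, x @ wv            # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])     # how much each token attends to every other token
    weights = softmax(scores, axis=-1)          # "what to make the statistics over"
    return weights @ v

seq_len, d_model = 16, 32                       # placeholder sizes
x = np.random.randn(seq_len, d_model)           # stand-in token embeddings
wq, wk, wv = (np.random.randn(d_model, d_model) for _ in range(3))
print(self_attention(x, wq, wk, wv).shape)      # (16, 32)
```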
01:40:46 And the context is significantly larger
01:40:49 than the adjacent word.
01:40:51 Yes, so the context that GPT3 has been using,
01:40:55 the transformer itself is from 2017
01:40:58 and it wasn’t using that large of a context.
01:41:02 OpenAI has basically scaled up this idea
01:41:05 as far as they could at the time.
01:41:06 And the context is about 2048 symbols,
01:41:11 tokens in the language.
01:41:12 These symbols are not characters,
01:41:15 but they take the words and project them
01:41:17 into a vector space where words
01:41:20 that are statistically co-occurring a lot
01:41:22 are neighbors already.
01:41:23 So it’s already a simplification
01:41:24 of the problem a little bit.
01:41:26 And so every word is basically a set of coordinates
01:41:29 in a high dimensional space.
01:41:31 And then they use some kind of trick
01:41:33 to also encode the order of the words,
01:41:36 not just in a sentence but across the whole window;
01:41:37 2048 tokens is about a couple of pages of text,
01:41:41 or two and a half pages of text.
01:41:43 And so they managed to do pretty exhaustive statistics
01:41:46 over the potential relationships
01:41:49 between two pages of text, which is tremendous.
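The "trick to encode the order of the words" in the original transformer paper is a positional encoding added to the token embeddings; here is a minimal sketch of the sinusoidal variant at a GPT3-style context length. The embedding size is just a placeholder.

```python
# Sketch of sinusoidal positional encoding: a position-dependent vector is
# added to each token embedding so that word order survives the otherwise
# order-blind attention operation.
import numpy as np

def positional_encoding(num_positions, d_model):
    positions = np.arange(num_positions)[:, None]      # (T, 1)
    dims = np.arange(0, d_model, 2)[None, :]           # (1, d_model / 2)
    angles = positions / (10000 ** (dims / d_model))
    pe = np.zeros((num_positions, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

context_len, d_model = 2048, 512                       # GPT3-scale context; embedding size is a placeholder
token_embeddings = np.random.randn(context_len, d_model)   # stand-in for learned embeddings
model_input = token_embeddings + positional_encoding(context_len, d_model)
print(model_input.shape)                               # (2048, 512)
```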
01:41:51 I was just using a single sentence back then.
01:41:55 And I was only looking for first order relationships.
01:41:58 And they were really looking
01:42:00 for much, much higher level relationships.
01:42:02 And what they discovered after they fed this
01:42:05 with an enormous amount of training data,
01:42:06 pretty much the written internet,
01:42:08 or a subset of it that had some quality,
01:42:12 but a substantial portion of the Common Crawl, is
01:42:15 that they’re not only able to reproduce style,
01:42:18 but they’re also able to reproduce
01:42:19 some pretty detailed semantics,
01:42:21 like being able to add three digit numbers
01:42:24 and multiply two digit numbers
01:42:26 or to translate between programming languages
01:42:28 and things like that.
01:42:30 So the results that GPT3 got, I think were amazing.
01:42:34 By the way, I actually didn’t check carefully.
01:42:38 It’s funny you just mentioned
01:42:40 how you coupled semantics to the multiplication.
01:42:42 Is it able to do some basic math on two digit numbers?
01:42:46 Yes.
01:42:47 Okay, interesting.
01:42:48 I thought there’s a lot of failure cases.
01:42:53 Yeah, it basically fails if you take numbers with more digits.
01:42:56 So with four digit numbers and so on it makes carrying mistakes
01:42:59 and so on.
01:43:00 And if you take larger numbers,
01:43:02 you don’t get useful results at all.
01:43:04 And this could be an issue of the training set
01:43:09 where there are not many examples
01:43:10 of successful long form addition
01:43:13 in standard human written text.
01:43:15 And humans aren’t very good
01:43:16 at doing three digit numbers either.
01:43:19 Yeah, you’re not writing a lot about it.
01:43:22 And the other thing is that the loss function
01:43:24 that is being used is only minimizing surprise.
01:43:27 So it’s predicting what comes next in the typical text.
01:43:29 It’s not trying to go for causal closure first
01:43:32 as we do.
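Concretely, the "minimizing surprise" objective is next-token cross-entropy; here is a toy sketch, with made-up probabilities standing in for model outputs.

```python
# The training objective of GPT-like models in miniature: cross-entropy on
# the next token, i.e. the surprise (negative log probability) the model
# assigns to what actually comes next.
import math

def next_token_loss(predicted_probs, actual_next_token):
    return -math.log(predicted_probs[actual_next_token])

# Made-up distribution over a toy vocabulary after the prefix "2 + 2 =".
predicted_probs = {"4": 0.7, "5": 0.1, "cat": 0.2}
print(next_token_loss(predicted_probs, "4"))   # low loss: the model was not surprised
print(next_token_loss(predicted_probs, "5"))   # higher loss: more surprise
```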
01:43:33 Yeah.
01:43:35 But the fact that that kind of prediction works
01:43:39 to generate text that’s semantically rich
01:43:42 and consistent is interesting.
01:43:45 Yeah.
01:43:45 So yeah, so it’s amazing that it’s able
01:43:47 to generate semantically consistent text.
01:43:50 It’s not consistent.
01:43:51 So the problem is that it loses coherence at some point,
01:43:54 but it’s also, I think, not correct to say
01:43:57 that GPT3 is unable to deal with semantics at all
01:44:01 because you ask it to perform certain transformations
01:44:04 in text and it performs these transformations in text.
01:44:07 And the kind of additions that it’s able
01:44:09 to perform are transformations in text, right?
01:44:12 And there are proper semantics involved.
01:44:15 You can also do more.
01:44:16 There was a paper that was generating lots
01:44:19 and lots of mathematically correct text
01:44:24 and was feeding this into a transformer.
01:44:26 And as a result, it was able to learn
01:44:29 how to do differentiation and integration in ways
01:44:32 that, according to the authors, Mathematica could not.
01:44:37 To which some of the people in Mathematica responded
01:44:39 that they were not using Mathematica in the right way
01:44:42 and so on.
01:44:43 I have not really followed the resolution of this conflict.
01:44:46 This part, as a small tangent,
01:44:48 I really don’t like in machine learning papers,
01:44:51 which is that they often give anecdotal evidence.
01:44:56 They’ll find like one example
01:44:58 in some kind of specific use of Mathematica
01:45:00 and demonstrate, look, here’s,
01:45:01 they’ll show successes and failures,
01:45:04 but they won’t have a very clear representation
01:45:07 of how many cases this actually represents.
01:45:09 Yes, but I think as a first paper,
01:45:11 this is a pretty good start.
01:45:12 And so the take home message, I think,
01:45:15 is that the authors could get better results
01:45:19 from this in their experiments
01:45:21 than they could get from the way
01:45:23 in which they were using computer algebra systems,
01:45:25 which means that it was not nothing.
01:45:29 And it’s able to perform substantially better
01:45:32 than GPT3 can, which is based on a much larger amount
01:45:35 of training data using the same underlying algorithm.
01:45:38 Well, let me ask, again,
01:45:41 so I’m using your tweets as if this is like Plato, right?
01:45:47 As if these are well thought out novels that you’ve written.
01:45:51 You tweeted, GPT4 is listening to us now.
01:45:58 This is one way of asking,
01:46:00 what are the limitations of GPT3 when it scales?
01:46:04 So what do you think will be the capabilities
01:46:06 of GPT4, GPT5, and so on?
01:46:10 What are the limits of this approach?
01:46:11 So obviously when we are writing things right now,
01:46:15 everything that we are writing now
01:46:16 is going to be training data
01:46:18 for the next generation of machine learning models.
01:46:20 So yes, of course, GPT4 is listening to us.
01:46:23 And I think the tweet is already a little bit older
01:46:25 and we now have Wu Dao
01:46:27 and we have a number of other systems
01:46:30 that basically are placeholders for GPT4.
01:46:33 I don’t know what OpenAI’s plans are in this regard.
01:46:35 I read that tweet in several ways.
01:46:39 So one is obviously everything you put on the internet
01:46:42 is used as training data.
01:46:44 But in a second way I read it is in a,
01:46:49 we talked about agency.
01:46:51 I read it as almost like GPT4 is intelligent enough
01:46:55 to be choosing to listen.
01:46:58 So not only like did a programmer tell it
01:47:00 to collect this data and use it for training,
01:47:03 I almost saw the humorous angle,
01:47:06 which is like it has achieved AGI kind of thing.
01:47:09 Well, the thing is, could we already be living in GPT5?
01:47:13 So GPT4 is listening and GPT5 actually constructing
01:47:18 the entirety of the reality where we…
01:47:20 Of course, in some sense,
01:47:22 what everybody is trying to do right now in AI
01:47:24 is to extend the transformer to be able to deal with video.
01:47:28 And there are very promising extensions, right?
01:47:31 There’s a work by Google that is called Perceiver
01:47:36 and that is overcoming some of the limitations
01:47:39 of the transformer by letting it learn the topology
01:47:41 of the different modalities separately.
01:47:44 And by training it to find better input features.
01:47:50 So basically feature abstractions that are being used
01:47:52 by this successor to GPT3 are chosen in such a way
01:47:58 that it’s able to deal with video input.
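A rough sketch of the core Perceiver idea being described, as I understand it: a small latent array cross-attends to a much larger multimodal input array, so the expensive attention no longer grows with the square of the input size. Shapes and weights here are placeholders, not the actual architecture details.

```python
# Sketch of Perceiver-style cross-attention: a small latent array queries a
# large multimodal input array, so cost grows with latent size, not input size.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latents, inputs, wq, wk, wv):
    q = latents @ wq                   # queries come from the small latent array
    k, v = inputs @ wk, inputs @ wv    # keys and values come from the large input array
    weights = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)
    return weights @ v                 # result keeps the shape of the latent array

num_latents, num_inputs, d = 64, 10000, 128     # e.g. 10k "pixels"; placeholder sizes
latents = np.random.randn(num_latents, d)       # learned latent array (random stand-in)
inputs = np.random.randn(num_inputs, d)         # flattened video/image features (random stand-in)
wq, wk, wv = (np.random.randn(d, d) for _ in range(3))
print(cross_attention(latents, inputs, wq, wk, wv).shape)   # (64, 128)
```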
01:48:00 And there is more to be done.
01:48:02 So one of the limitations of GPT3 is that it’s amnesiac.
01:48:07 So it forgets everything beyond the two pages
01:48:09 that it currently reads also during generation,
01:48:12 not just during learning.
01:48:14 Do you think that’s fixable
01:48:16 within the space of deep learning?
01:48:18 Can you just make a bigger, bigger, bigger input?
01:48:21 No, I don’t think that our own working memory
01:48:24 is infinitely large.
01:48:25 It’s probably also just a few thousand bits.
01:48:28 But what you can do is you can structure
01:48:31 this working memory.
01:48:31 So instead of just force feeding this thing
01:48:34 a certain thing that it has to focus on,
01:48:37 where it’s not allowed to focus on anything else
01:48:39 as its input,
01:48:41 you allow it to construct its own working memory as we do.
01:48:44 When we are reading a book,
01:48:46 it’s not that we are focusing our attention
01:48:48 in such a way that we can only remember the current page.
01:48:52 We will also try to remember other pages
01:48:54 and try to undo what we learned from them
01:48:56 or modify what we learned from them.
01:48:58 We might get up and take another book from the shelf.
01:49:01 We might go out and ask somebody,
01:49:03 we can edit our working memory in any way that is useful
01:49:06 to put a context together that allows us
01:49:09 to draw the right inferences and to learn the right things.
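One way to read "constructing your own working memory" in engineering terms, as an illustration rather than a description of any existing system, is retrieval: score everything you have stored against what you are currently doing and pull only the most relevant pieces into the context. The embed function below is a toy stand-in for a learned encoder.

```python
# Hedged sketch of an editable working memory: rather than a fixed window of
# the last N tokens, score stored passages against the current task and keep
# only the most relevant ones in the context.
import numpy as np

def embed(text, dim=64):
    """Toy stand-in for a learned text encoder: a fixed random vector per text."""
    rng = np.random.default_rng(sum(ord(c) for c in text))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def build_working_memory(current_task, memory_store, k=2):
    task_vec = embed(current_task)
    scored = sorted(memory_store, key=lambda m: -float(embed(m) @ task_vec))
    return scored[:k]     # the k passages judged most useful right now

memory_store = [
    "notes from chapter one",
    "a conversation with a colleague about the same topic",
    "an unrelated shopping list",
]
print(build_working_memory("summarize chapter one", memory_store))
```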
01:49:13 So this ability to perform experiments on the world
01:49:16 based on an attempt to become fully coherent
01:49:20 and to achieve causal closure,
01:49:22 to achieve a certain aesthetic of your modeling,
01:49:24 that is something that eventually needs to be done.
01:49:28 And at the moment we are skirting this in some sense
01:49:31 by building systems that are larger and faster
01:49:33 so they can use dramatically larger resources
01:49:36 than human beings can, and much more training data,
01:49:38 to get to models that in some sense
01:49:40 are already way superhuman
01:49:42 and in other ways are laughably incoherent.
01:49:45 So do you think sort of making the systems like,
01:49:50 what would you say, multi resolutional?
01:49:51 So like some of the language models
01:49:56 are focused on two pages,
01:49:59 some are focused on two books,
01:50:03 some are focused on two years of reading,
01:50:06 some are focused on a lifetime,
01:50:08 so it’s like stacks of GPT3s all the way down.
01:50:11 You want to have gaps in between them.
01:50:13 So it’s not necessarily two years with no gaps.
01:50:17 It’s things out of two years or out of 20 years
01:50:19 or 2,000 years or 2 billion years
01:50:22 where you are just selecting those bits
01:50:24 that are predicted to be the most useful ones
01:50:27 to understand what you’re currently doing.
01:50:29 And this prediction itself requires a very complicated model
01:50:32 and that’s the actual model that you need to be making.
01:50:34 It’s not just that you are trying to understand
01:50:36 the relationships between things,
01:50:38 but what you need to make relationships,
01:50:40 discover relationships over.
01:50:42 I wonder what that thing looks like,
01:50:45 what the architecture for the thing
01:50:47 that’s able to have that kind of model.
01:50:50 I think it needs more degrees of freedom
01:50:52 than the current models have.
01:50:54 So it starts out with the fact that you possibly
01:50:57 don’t just want to have a feed forward model,
01:50:59 but you want it to be fully recurrent.
01:51:02 And to make it fully recurrent,
01:51:04 you probably need to loop it back into itself
01:51:06 and allow it to skip connections.
01:51:08 Once you do this,
01:51:09 when you’re predicting the next frame
01:51:12 and your internal next frame in every moment,
01:51:15 and you are able to use skip connections,
01:51:17 it means that signals can travel from the output
01:51:21 of the network into the middle of the network
01:51:24 faster than the inputs do.
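A minimal sketch, my construction rather than a named architecture, of the loop being described: the network predicts the next frame, and its previous output is fed back into a middle layer through a skip connection, so that signal reaches the middle faster than the freshly processed input does.

```python
# Sketch of a recurrent predictor with a feedback skip connection: the
# previous output is injected into the middle layer, arriving there "earlier"
# than the information carried by the current input.
import numpy as np

rng = np.random.default_rng(0)
d = 16                                           # placeholder feature size
w_in, w_mid, w_out = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
w_feedback = rng.standard_normal((d, d)) * 0.1   # output -> middle skip connection

def step(x, prev_output):
    h1 = np.tanh(x @ w_in)
    # The previous prediction reaches the middle layer directly,
    # one step ahead of what the current input contributes.
    h2 = np.tanh(h1 @ w_mid + prev_output @ w_feedback)
    return np.tanh(h2 @ w_out)                   # prediction of the next frame

prev_output = np.zeros(d)
for t in range(5):                               # toy sequence of random "frames"
    frame = rng.standard_normal(d)
    prev_output = step(frame, prev_output)
print(prev_output.shape)                         # (16,)
```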
01:51:25 Do you think it can still be differentiable?
01:51:28 Do you think it still can be a neural network?
01:51:30 Sometimes it can and sometimes it cannot.
01:51:32 So it can still be a neural network,
01:51:35 but not a fully differentiable one.
01:51:37 And when you want to deal with non differentiable ones,
01:51:40 you need to have an attention system
01:51:42 that is discrete and two dimensional
01:51:44 and can perform grammatical operations.
01:51:46 You need to be able to perform program synthesis.
01:51:49 You need to be able to backtrack
01:51:51 in these operations that you perform on this thing.
01:51:54 And this thing needs a model of what it’s currently doing.
01:51:56 And I think this is exactly the purpose
01:51:58 of our own consciousness.
01:52:01 Yeah, the program things are tricky on neural networks.
01:52:05 So let me ask you, it’s not quite program synthesis,
01:52:09 but the application of these language models
01:52:12 to generation, to program synthesis,
01:52:15 but generation of programs.
01:52:16 So if you look at GitHub Copilot,
01:52:19 which is based on OpenAI’s Codex,
01:52:21 I don’t know if you got a chance to look at it,
01:52:22 but it’s the system that’s able to generate code
01:52:26 once you prompt it with, what is it?
01:52:30 Like the header of a function with some comments.
01:52:32 And it seems to do an incredibly good job
01:52:36 or not a perfect job, which is very important,
01:52:39 but an incredibly good job of generating functions.
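For concreteness, here is a hypothetical example of the kind of prompt and completion being described: the comment and function header are what you would hand to the model, and the body is a hand-written stand-in for what such a model might generate, not actual Codex output.

```python
# Prompt: a docstring plus a function header, of the kind you would give Copilot.
# The body below is an illustrative, hand-written stand-in for what a code
# model might generate from that prompt; it is not actual model output.

def parse_timestamp(ts: str) -> int:
    """Convert a 'HH:MM:SS' timestamp into the total number of seconds."""
    hours, minutes, seconds = (int(part) for part in ts.split(":"))
    return hours * 3600 + minutes * 60 + seconds

assert parse_timestamp("01:37:00") == 5820
```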
01:52:42 What do you make of that?
01:52:44 Are you, is this exciting
01:52:45 or is this just a party trick, a demo?
01:52:48 Or is this revolutionary?
01:52:51 I haven’t worked with it yet.
01:52:52 So it’s difficult for me to judge it,
01:52:55 but I would not be surprised
01:52:57 if it turns out to be revolutionary.
01:52:59 And that’s because the majority of programming tasks
01:53:01 that are being done in the industry right now
01:53:04 are not creative.
01:53:05 People are writing code that other people have written,
01:53:08 or they’re putting things together from code fragments
01:53:10 that others have had.
01:53:11 And a lot of the work that programmers do in practice
01:53:14 is to figure out how to overcome the gaps
01:53:17 between their current knowledge
01:53:18 and the things that people have already done.
01:53:20 How to copy and paste from Stack Overflow, that’s right.
01:53:24 And so of course we can automate that.
01:53:26 Yeah, to make it much faster to copy and paste
01:53:29 from Stack Overflow.
01:53:30 Yes, but it’s not just copying and pasting.
01:53:32 It’s also basically learning which parts you need to modify
01:53:36 to make them fit together.
01:53:38 Yeah, like literally sometimes as simple
01:53:41 as just changing the variable names.
01:53:43 So it fits into the rest of your code.
01:53:45 Yes, but this requires that you understand the semantics
01:53:47 of what you’re doing to some degree.
01:53:49 And you can automate some of those things.
01:53:51 The thing that makes people nervous of course
01:53:53 is that a little bit wrong in a program
01:53:57 can have a dramatic effect on the actual final operation
01:54:02 of that program.
01:54:03 So that’s one little error,
01:54:05 which in the space of language doesn’t really matter,
01:54:08 but in the space of programs can matter a lot.
01:54:11 Yes, but this is already what is happening
01:54:14 when humans program code.
01:54:15 Yeah, this is.
01:54:16 So we have a technology to deal with this.
01:54:20 Somehow it becomes scarier when you know
01:54:23 that a program generated code
01:54:25 that’s running a nuclear power plant.
01:54:27 It becomes scarier.
01:54:29 You know, humans have errors too.
01:54:31 Exactly.
01:54:32 But it’s scarier when a program is doing it
01:54:35 because why, why?
01:54:38 I mean, there’s a fear that a program,
01:54:43 like a program may not be as good as humans
01:54:48 at knowing when stuff is important enough not to mess up.
01:54:51 Like there’s a potential misalignment of priorities,
01:55:00 of values.
01:55:01 Maybe that’s the source of the worry.
01:55:03 I mean, okay, if I give you code generated
01:55:06 by GitHub Copilot and code generated by a human
01:55:12 and say here, use one of these,
01:55:16 how do you select, today and in the next 10 years,
01:55:20 which code you use?
01:55:21 Wouldn’t you still be comfortable with the human?
01:55:25 At the moment when you go to Stanford to get an MRI,
01:55:29 they will write a bill to the insurance over $20,000.
01:55:34 And of this, maybe half of that gets paid by the insurance
01:55:38 and a quarter gets paid by you.
01:55:40 And the MRI costs them maybe $600 to make, probably less.
01:55:44 And what are the values of the person
01:55:47 that writes the software and deploys this process?
01:55:51 It’s very difficult for me to say whether I trust people.
01:55:56 I think that what happens there is a mixture
01:55:58 of proper Anglo Saxon Protestant values
01:56:01 where somebody is trying to serve an abstract greater whole,
01:56:04 and organized crime.
01:56:06 Well, that’s a very harsh,
01:56:10 I think that’s a harsh view of humanity.
01:56:15 There’s a lot of bad people, whether incompetent
01:56:18 or just malevolent in this world, yes.
01:56:21 But it feels like the more malevolent,
01:56:25 so the more damage you do to the world,
01:56:29 the more resistance you have in your own human heart.
01:56:34 Yeah, but don’t explain with malevolence or stupidity
01:56:37 what can be explained by just people
01:56:38 acting on their incentives.
01:56:41 Right, so what happens in Stanford
01:56:42 is not that somebody is evil.
01:56:45 It’s just that they do what they’re being paid for.
01:56:48 No, it’s not evil.
01:56:50 That’s, I tend to, no, I see that as malevolence.
01:56:53 I see as I, even like being a good German,
01:56:58 as I told you offline, is some,
01:57:01 it’s not absolute malevolence,
01:57:05 but it’s a small amount, it’s cowardice.
01:57:07 I mean, when you see there’s something wrong with the world,
01:57:10 it’s either incompetence and you’re not able to see it,
01:57:15 or it’s cowardice that you’re not able to stand up,
01:57:17 not necessarily in a big way, but in a small way.
01:57:21 So I do think that is a bit of malevolence.
01:57:25 I’m not sure the example you’re describing
01:57:27 is a good example of that.
01:57:28 So the question is, what is it that you are aiming for?
01:57:31 And if you don’t believe in the future,
01:57:34 if you, for instance, think that the dollar is going to crash,
01:57:37 why would you try to save dollars?
01:57:39 If you don’t think that humanity will be around
01:57:42 in a hundred years from now,
01:57:43 because global warming will wipe out civilization,
01:57:47 why would you need to act as if it were?
01:57:50 Right, so the question is,
01:57:51 is there an overarching aesthetics
01:57:53 that is projecting you and the world into the future,
01:57:56 which I think is the basic idea of religion,
01:57:59 that you understand the interactions
01:58:01 that we have with each other
01:58:02 as some kind of civilization level agent
01:58:04 that is projecting itself into the future.
01:58:07 If you don’t have that shared purpose,
01:58:10 what is there to be ethical for?
01:58:12 So I think when we talk about ethics and AI,
01:58:16 we need to go beyond the insane bias discussions and so on,
01:58:20 where people are just measuring the distance
01:58:22 between a statistic to their preferred current world model.
01:58:27 The optimism, wait, wait, wait,
01:58:29 I was a little confused by the previous thing,
01:58:31 just to clarify.
01:58:32 There is a kind of underlying morality
01:58:39 to having an optimism that human civilization
01:58:43 will persist for longer than a hundred years.
01:58:45 Like I think a lot of people believe
01:58:50 that it’s a good thing for us to keep living.
01:58:53 Yeah, of course.
01:58:54 And thriving.
01:58:54 This morality itself is not an end to itself.
01:58:56 It’s instrumental to people living in a hundred years
01:58:59 from now or 500 years from now.
01:59:03 So it’s only justifiable if you actually think
01:59:06 that it will lead to people or increase the probability
01:59:09 of people being around in that timeframe.
01:59:12 And a lot of people don’t actually believe that,
01:59:14 at least not actively.
01:59:16 But believe what exactly?
01:59:17 So I was…
01:59:19 Most people don’t believe
01:59:20 that they can afford to act on such a model.
01:59:23 Basically what happens in the US
01:59:25 is I think that the healthcare system
01:59:26 is for a lot of people no longer sustainable,
01:59:28 which means that if they need the help
01:59:30 of the healthcare system,
01:59:31 they’re often not able to afford it.
01:59:33 And when they cannot help it,
01:59:35 they often go bankrupt.
01:59:37 I think the leading cause of personal bankruptcy
01:59:40 in the US is the healthcare system.
01:59:42 And that would not be necessary.
01:59:44 It’s not because people are consuming
01:59:46 more and more medical services
01:59:48 and are achieving a much, much longer life as a result.
01:59:51 That’s not actually the story that is happening
01:59:53 because you can compare it to other countries.
01:59:55 And life expectancy in the US is currently not increasing
01:59:58 and it’s not as high as in all the other
02:00:00 industrialized countries.
02:00:01 So some industrialized countries are doing better
02:00:03 with a much cheaper healthcare system.
02:00:06 And what you can see is for instance,
02:00:08 administrative bloat.
02:00:09 The healthcare system has maybe to some degree
02:00:13 deliberately set up as a job placement program
02:00:17 to allow people to continue living
02:00:19 in middle class existence,
02:00:20 despite not having a useful use case in terms of productivity.
02:00:25 So they are being paid to push paper around.
02:00:28 And the number of administrators in the healthcare system
02:00:31 has been increasing much faster
02:00:33 than the number of practitioners.
02:00:35 And this is something that you have to pay for.
02:00:37 And also the revenues that are being generated
02:00:40 in the healthcare system are relatively large
02:00:42 and somebody has to pay for them.
02:00:43 And the reason why they are so large
02:00:45 is because market mechanisms are not working.
02:00:48 The FDA is largely not protecting people
02:00:51 from malpractice of healthcare providers.
02:00:55 The FDA is protecting healthcare providers
02:00:58 from competition.
02:00:59 Right, okay.
02:01:00 So this is a thing that has to do with values.
02:01:03 And this is not because people are malicious on all levels.
02:01:06 It’s because they are not incentivized
02:01:08 to act on a greater whole on this idea
02:01:11 that you treat somebody who comes to you as a patient,
02:01:14 like you would treat a family member.
02:01:15 Yeah, but we’re trying, I mean,
02:01:18 you’re highlighting a lot of the flaws
02:01:20 of the different institutions,
02:01:21 the systems we’re operating under,
02:01:23 but I think there’s a continued throughout history
02:01:25 mechanism design of trying to design incentives
02:01:29 in such a way that these systems behave
02:01:31 better and better and better.
02:01:32 I mean, it’s a very difficult thing
02:01:34 to operate a society of hundreds of millions of people
02:01:38 effectively with.
02:01:39 Yes, so do we live in a society that is ever correcting?
02:01:42 Is this, do we observe that our models
02:01:46 of what we are doing are predictive of the future
02:01:49 and when they are not, we improve them.
02:01:51 Are our laws adjudicated with clauses
02:01:54 that you put into every law,
02:01:56 what is meant to be achieved by that law
02:01:57 and the law will be automatically repealed
02:02:00 if it’s not achieving that, right?
02:02:01 If you are optimizing your own laws,
02:02:03 if you’re writing your own source code,
02:02:05 you probably make an estimate of what is this thing
02:02:08 that’s currently wrong in my life?
02:02:09 What is it that I should change about my own policies?
02:02:12 What is the expected outcome?
02:02:14 And if that outcome doesn’t manifest,
02:02:16 I will change the policy back, right?
02:02:18 Or I would change it to something different.
02:02:20 Are we doing this on a societal level?
02:02:22 I think so.
02:02:23 I think it’s easy to sort of highlight the,
02:02:25 I think we’re doing it in the way that,
02:02:29 like I operate my current life.
02:02:30 I didn’t sleep much last night.
02:02:32 You would say that Lex,
02:02:34 the way you need to operate your life
02:02:35 is you need to always get sleep.
02:02:37 The fact that you didn’t sleep last night
02:02:39 is totally the wrong way to operate in your life.
02:02:43 Like you should have gotten all your shit done in time
02:02:46 and gotten to sleep because sleep is very important
02:02:48 for health and you’re highlighting,
02:02:50 look, this person is not sleeping.
02:02:52 Look, the medical, the healthcare system is operating poorly.
02:02:56 But the point is we just,
02:02:59 it seems like this is the way,
02:03:00 especially in the capitalist society, we operate.
02:03:02 We keep running into trouble and last minute,
02:03:05 we try to get our way out through innovation
02:03:09 and it seems to work.
02:03:10 You have a lot of people that ultimately are trying
02:03:13 to build a better world and get urgency about them
02:03:18 when the problem becomes more and more imminent.
02:03:22 And that’s the way this operates.
02:03:24 But if you look at the long arc of history,
02:03:29 it seems like that operating on deadlines
02:03:34 produces progress and builds better and better systems.
02:03:36 You probably agree with me that the US
02:03:39 should have engaged in mask production in January 2020
02:03:44 and that we should have shut down the airports early on
02:03:47 and that we should have made it mandatory
02:03:50 that the people that work in nursing homes
02:03:53 are living on campus rather than living at home
02:03:57 and then coming in and infecting people in the nursing homes
02:04:01 that had no immune response to COVID.
02:04:03 And that is something that was, I think, visible back then.
02:04:08 The correct decisions haven’t been made.
02:04:10 We would have the same situation again.
02:04:12 How do we know that these wrong decisions
02:04:14 are not being made again?
02:04:15 Have the people that made the decisions
02:04:17 to not protect the nursing homes been punished?
02:04:20 Have the people that made the wrong decisions
02:04:23 with respect to testing that prevented the development
02:04:26 of testing by startup companies and the importing
02:04:29 of tests from countries that already had them,
02:04:32 have these people been held responsible?
02:04:34 First of all, so who do you wanna put
02:04:37 before the firing squad?
02:04:38 I think they are being held responsible.
02:04:39 No, just make sure that this doesn’t happen again.
02:04:41 No, but it’s not that, yes, they’re being held responsible
02:04:46 by many voices, by people being frustrated.
02:04:48 There’s new leaders being born now
02:04:50 that we’re going to see rise to the top in 10 years.
02:04:54 This moves slower than, there’s obviously
02:04:57 a lot of older incompetence and bureaucracy
02:05:01 and these systems move slowly.
02:05:03 They move like science, one death at a time.
02:05:06 So yes, I think the pain that’s been felt
02:05:11 in the previous year is reverberating throughout the world.
02:05:15 Maybe I’m getting old, I suspect that every generation
02:05:18 in the US after the war has lost the plot even more.
02:05:21 I don’t see this development.
02:05:23 The war, World War II?
02:05:24 Yes, so basically there was a time when we were modernist
02:05:29 and in this modernist time, the US felt actively threatened
02:05:33 by the things that happened in the world.
02:05:35 The US was worried about possibility of failure
02:05:39 and this imminence of possible failure led to decisions.
02:05:44 There was a time when the government would listen
02:05:47 to physicists about how to do things
02:05:50 and the physicists were actually concerned
02:05:52 about what the government should be doing.
02:05:53 So they would be writing letters to the government
02:05:56 and so for instance, the decision for the Manhattan Project
02:05:58 was something that was driven in a conversation
02:06:01 between physicists and the government.
02:06:04 I don’t think such a discussion would take place today.
02:06:06 I disagree, I think if the virus was much deadlier,
02:06:10 we would see a very different response.
02:06:12 I think the virus was not sufficiently deadly
02:06:14 and instead because it wasn’t very deadly,
02:06:17 what happened is the current system
02:06:20 started to politicize it.
02:06:21 The mask, this is what I realized with masks early on,
02:05:25 they very quickly became not a solution
02:05:29 but a thing that politicians used
02:06:32 to divide the country.
02:06:33 So the same things happened with vaccines, same thing.
02:06:36 So like nobody’s really,
02:06:38 people weren’t talking about solutions to this problem
02:06:41 because I don’t think the problem was bad enough.
02:06:43 When you talk about the war,
02:06:45 I think our lives are too comfortable.
02:06:48 I think in the developed world, things are too good
02:06:52 and we have not faced severe dangers.
02:06:54 When the danger, the severe dangers,
02:06:57 existential threats are faced, that’s when we step up
02:07:00 on a small scale and a large scale.
02:07:02 Now, I don’t, that’s sort of my argument here
02:07:07 but I did think the virus is, I was hoping
02:07:11 that it was actually sufficiently dangerous
02:07:16 for us to step up because especially in the early days,
02:07:18 it was unclear, it still is unclear because of mutations,
02:07:23 how bad it might be, right?
02:07:25 And so I thought we would step up and even,
02:07:30 so the masks point is a tricky one because to me,
02:07:35 the manufacture of masks isn’t even the problem.
02:07:38 I’m still to this day and I was involved
02:07:41 with a bunch of this work, have not seen good science done
02:07:44 on whether masks work or not.
02:07:46 Like there still has not been a large scale study.
02:07:49 To me, that should be, there should be large scale studies
02:07:51 and every possible solution, like aggressive
02:07:55 in the same way that the vaccine development
02:07:56 was aggressive.
02:07:57 There should be masks, which tests,
02:07:59 what kind of tests work really well, what kind of,
02:08:03 like even the question of how the virus spreads.
02:08:06 There should be aggressive studies on that to understand.
02:08:09 I’m still, as far as I know, there’s still a lot
02:08:12 of uncertainty about that.
02:08:14 Nobody wants to see this as an engineering problem
02:08:17 that needs to be solved.
02:08:18 That’s what I was surprised about, but I wouldn’t…
02:08:21 So I find that our views are largely convergent
02:08:24 but not completely.
02:08:25 So I agree with the thing that, because our society
02:08:29 in some sense perceives itself as too big to fail.
02:08:32 Right.
02:08:33 The virus did not alert people to the fact
02:08:35 that we are facing possible failure
02:08:38 which basically keeps us in the postmodernist mode.
02:08:41 And I don’t mean in a philosophical sense
02:08:43 but in a societal sense.
02:08:45 The difference between the postmodern society
02:08:47 and the modern society is that the modernist society
02:08:50 has to deal with the ground truth
02:08:52 and the postmodernist society has to deal with appearances.
02:08:55 Politics becomes a performance
02:08:57 and the performance is done for an audience
02:08:59 and the organized audience is the media.
02:09:02 And the media evaluates itself via other media, right?
02:09:05 So you have an audience of critics that evaluate themselves.
02:09:09 And I don’t think it’s so much the failure
02:09:10 of the politicians because to get in power
02:09:12 and to stay in power, you need to be able
02:09:15 to deal with the published opinion.
02:09:17 Well, I think it goes in cycles
02:09:19 because what’s going to happen is all
02:09:22 of the small business owners, all the people
02:09:24 who truly are suffering and will suffer more
02:09:27 because the effects of the closure of the economy
02:09:31 and the lack of solutions to the virus,
02:09:34 they’re going to rise up.
02:09:36 And hopefully, I mean, this is where charismatic leaders
02:09:40 can get the world in trouble
02:09:42 but hopefully we will elect great leaders
02:09:47 that will break through this postmodernist idea
02:09:51 of the media and the perception
02:09:55 and the drama on Twitter and all that kind of stuff.
02:09:57 But you know, this can go either way.
02:09:59 Yeah.
02:10:00 When the Weimar Republic was unable to deal
02:10:03 with the economic crisis that Germany was facing,
02:10:07 there was an option to go back.
02:10:10 But there were people who thought,
02:10:11 let’s get back to a constitutional monarchy
02:10:14 and let’s get this to work because democracy doesn’t work.
02:10:18 And eventually, there was no way back.
02:10:21 People decided there was no way back.
02:10:23 They needed to go forward.
02:10:24 And the only options for going forward
02:10:26 was to become Stalinist communist,
02:10:29 basically an option to completely expropriate
02:10:34 the factories and so on and nationalize them
02:10:36 and to reorganize Germany in communist terms
02:10:40 and ally itself with Stalin, or fascism.
02:10:44 And both options were obviously very bad.
02:10:47 And the one that the Germans picked
02:10:49 led to a catastrophe that devastated Europe.
02:10:54 And I’m not sure if the US has an immune response
02:10:57 against that.
02:10:58 I think that the far right is currently very weak in the US,
02:11:01 but this can easily change.
02:11:05 Do you think from a historical perspective,
02:11:08 Hitler could have been stopped
02:11:10 from within Germany or from outside?
02:11:14 Or this, well, depends on who you wanna focus,
02:11:17 whether you wanna focus on Stalin or Hitler,
02:11:20 but it feels like Hitler was the one
02:11:22 as a political movement that could have been stopped.
02:11:25 I think that the point was that a lot of people
02:11:28 wanted Hitler, so he got support from a lot of quarters.
02:11:32 There was a number of industrialists who supported him
02:11:35 because they thought that the democracy
02:11:36 is obviously not working and unstable
02:11:38 and you need a strong man.
02:11:40 And he was willing to play that part.
02:11:43 There were also people in the US who thought
02:11:45 that Hitler would stop Stalin
02:11:47 and would act as a bulwark against Bolshevism,
02:11:52 which he probably would have done, right?
02:11:54 But at which cost?
02:11:56 And then many of the things that he was going to do,
02:11:59 like the Holocaust, was something where people thought
02:12:03 this is rhetoric, he’s not actually going to do this.
02:12:07 Especially many of the Jews themselves, which were humanists.
02:12:10 And for them, this was outside of the scope
02:12:12 that was thinkable.
02:12:13 Right.
02:12:14 I mean, I wonder if Hitler is uniquely,
02:12:20 I wanna carefully use this term, but uniquely evil.
02:12:23 So if Hitler was never born,
02:12:26 if somebody else would come in this place.
02:12:29 So like, just thinking about the progress of history,
02:12:33 how important are those singular figures
02:12:36 that lead to mass destruction and cruelty?
02:12:40 Because my sense is Hitler was unique.
02:12:47 It wasn’t just about the environment
02:12:49 and the context that gave him,
02:12:51 like another person would not come in his place
02:12:54 to do as destructive of the things that he did.
02:12:58 There was a combination of charisma, of madness,
02:13:02 of psychopathy, of just ego, all those things,
02:13:07 which are very unlikely to come together
02:13:09 in one person in the right time.
02:13:12 It also depends on the context of the country
02:13:14 that you’re operating in.
02:13:16 If you tell the Germans that they have a historical destiny
02:13:22 in this romantic country,
02:13:23 the effect is probably different
02:13:25 than it is in other countries.
02:13:27 But Stalin has killed a few more people than Hitler did.
02:13:33 And if you look at the probability
02:13:35 that you survived under Stalin versus under Hitler:
02:13:39 Hitler killed people if he thought
02:13:43 they were not worth living,
02:13:45 or if they were harmful to his racist project.
02:13:49 He basically felt that the Jews would be too cosmopolitan
02:13:52 and would not be willing to participate
02:13:55 in the racist redefinition of society
02:13:57 and the values of society,
02:13:58 and of the state in the way
02:14:01 that he wanted to have it.
02:14:03 So he saw them as harmful danger,
02:14:06 especially since they played such an important role
02:14:09 in the economy and culture of Germany.
02:14:13 And so basically he had some radical
02:14:18 but rational reason to murder them.
02:14:20 And Stalin just killed everyone.
02:14:23 Basically the Stalinist purges were such a random thing
02:14:26 where he said that there’s a certain possibility
02:14:31 that this particular part of the population
02:14:34 has a number of German collaborators or something,
02:14:36 and we just kill them all, right?
02:14:38 Or if you look at what Mao did,
02:14:40 the number of people that were killed
02:14:42 in absolute numbers were much higher under Mao
02:14:45 than they were under Stalin.
02:14:47 So it’s super hard to say.
02:14:49 The other thing is that you look at Genghis Khan and so on,
02:14:53 how many people he killed.
02:14:56 When you see there are a number of things
02:14:58 that happen in human history
02:14:59 that actually really put a substantial dent
02:15:02 in the existing population, or Napoleon.
02:15:05 And it’s very difficult to eventually measure it
02:15:09 because what’s happening is basically evolution
02:15:12 on a human scale where one monkey figures out
02:15:17 a way to become viral and is using this viral technology
02:15:22 to change the patterns of society
02:15:24 at the very, very large scale.
02:15:26 And what we find so abhorrent about these changes
02:15:29 is the complexity that is being destroyed by this.
02:15:32 That’s basically like a big fire that burns out
02:15:34 a lot of the existing culture and structure
02:15:36 that existed before.
02:15:38 Yeah, and it all just starts with one monkey.
02:15:42 One charismatic ape.
02:15:44 And there’s a bunch of them throughout history.
02:15:46 Yeah, but it’s in a given environment.
02:15:47 It’s basically similar to wildfires in California, right?
02:15:51 The temperature is rising.
02:15:53 There is less rain falling.
02:15:55 And then suddenly a single spark can have an effect
02:15:57 that in other times would be contained.
02:16:00 Okay, speaking of which, I love how we went
02:16:04 to Hitler and Stalin from 20, 30 minutes ago,
02:16:09 GPT3 generating programs.
02:16:13 The argument was about morality of AI versus human.
02:16:23 And specifically in the context of writing programs,
02:16:26 specifically in the context of programs
02:16:28 that can be destructive.
02:16:29 So running nuclear power plants
02:16:31 or autonomous weapons systems, for example.
02:16:35 And I think your inclination was to say that
02:16:39 it’s not so obvious that AI would be less moral than humans
02:16:43 or less effective at making a world
02:16:46 that would make humans happy.
02:16:48 So I’m not talking about self directed systems
02:16:52 that are making their own goals at a global scale.
02:16:57 If you just talk about the deployment
02:16:59 of technological systems that are able to see order
02:17:03 and patterns and use this as control models
02:17:05 to act on the goals that we give them,
02:17:08 then if we have the correct incentives
02:17:11 to set the correct incentives for these systems,
02:17:13 I’m quite optimistic.
02:17:16 So humans versus AI, let me give you an example.
02:17:20 Autonomous weapon system.
02:17:23 Let’s say there’s a city somewhere in the Middle East
02:17:26 that has a number of terrorists.
02:17:30 And the question is,
02:17:32 what’s currently done with drone technologies,
02:17:35 you have information about the location
02:17:37 of a particular terrorist and you have a targeted attack,
02:17:40 you have a bombing of that particular building.
02:17:43 And that’s all directed by humans
02:17:45 at the high level strategy
02:17:47 and also at the deployment of individual bombs and missiles
02:17:50 like the actual, everything is done by human
02:17:53 except the final targeting.
02:17:56 And it’s like spot, similar thing, like control the flight.
02:18:01 Okay, what if you give AI control and saying,
02:18:07 write a program that says,
02:18:10 here’s the best information I have available
02:18:12 about the location of these five terrorists,
02:18:14 here’s the city, make sure all the bombing you do
02:18:17 is constrained to the city, make sure it’s precision based,
02:18:21 but you take care of it.
02:18:22 So you do one level of abstraction out
02:18:25 and saying, take care of the terrorists in the city.
02:18:29 Which are you more comfortable with,
02:18:31 the humans or the JavaScript GPT3 generated code
02:18:35 that’s doing the deployment?
02:18:38 I mean, this is the kind of question I’m asking,
02:18:42 is the kind of bugs that we see in human nature,
02:18:47 are they better or worse than the kind of bugs we see in AI?
02:18:51 There are different bugs.
02:18:52 There is an issue that if people are creating
02:18:55 an imperfect automation of a process
02:18:59 that normally requires a moral judgment,
02:19:02 and this moral judgment is the reason
02:19:05 why it cannot be automated often,
02:19:07 it’s not because the computation is too expensive,
02:19:12 but because the model that you give the AI
02:19:14 is not an adequate model of the dynamics of the world,
02:19:17 because the AI does not understand the context
02:19:19 that it’s operating in the right way.
02:19:21 And this is something that already happens with Excel.
02:19:24 You don’t need to have an AI system to do this.
02:19:27 You have an automated process in place
02:19:30 where humans decide using automated criteria
02:19:33 whom to kill when and whom to target when,
02:19:36 which already happens.
02:19:38 And you have no way to get off the kill list
02:19:40 once that happens, once you have been targeted
02:19:42 according to some automatic criterion
02:19:44 by people in a bureaucracy, that is the issue.
02:19:48 The issue is not the AI, it’s the automation.
02:19:52 So there’s something about, right, it’s automation,
02:19:56 but there’s something about the,
02:19:58 there’s a certain level of abstraction
02:20:00 where you give control to AI to do the automation.
02:20:04 There’s a scale that can be achieved
02:20:07 that it feels like the scale of bug and scale mistake
02:20:10 and scale of destruction that can be achieved
02:20:14 of the kind that humans cannot achieve.
02:20:16 So AI is much more able to destroy
02:20:19 an entire country accidentally versus humans.
02:20:22 It feels like the more civilians die as they react
02:20:27 or suffer as the consequences of your decisions,
02:20:30 the more weight there is on the human mind
02:20:34 to make that decision.
02:20:36 And so like, it becomes more and more unlikely
02:20:39 to make that decision for humans.
02:20:41 For AI, it feels like it’s harder to encode
02:20:44 that kind of weight.
02:20:47 In a way, the AI that we’re currently building
02:20:49 is automating statistics, right?
02:20:51 Intelligence is the ability to make models
02:20:53 so you can act on them,
02:20:55 and AI is the tool to make better models.
02:20:58 So in principle, if you’re using AI wisely,
02:21:01 you’re able to prevent more harm.
02:21:04 And I think that the main issue is not on the side of the AI,
02:21:07 it’s on the side of the human command hierarchy
02:21:09 that is using technology irresponsibly.
02:21:12 So the question is how hard is it to encode,
02:21:15 to properly encode the right incentives into the AI?
02:21:19 So for instance, there’s this idea
02:21:21 of what happens if we let our airplanes being flown
02:21:24 with AI systems and the neural network is a black box
02:21:27 and so on.
02:21:28 And it turns out our neural networks
02:21:30 are actually not black boxes anymore.
02:21:32 They are function approximators using linear algebra,
02:21:36 and they are performing operations that we can understand.
02:21:40 But we can also, instead of letting the neural network
02:21:42 fly the airplane, use the neural network
02:21:44 to generate a provably correct program.
02:21:47 There’s a degree of accuracy of the proof
02:21:49 that a human could not achieve.
02:21:51 And so we can use our AI by combining
02:21:54 different technologies to build systems
02:21:56 that are much more reliable than the systems
02:21:58 that a human being could create.
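As a loose illustration of checking generated code against a specification, here is a bounded check rather than the formal proof being described; a real pipeline would discharge the specification with a theorem prover or SMT solver. The function and property names are hypothetical.

```python
# Loose illustration: exhaustively checking a (possibly machine-generated)
# function against its specification on a bounded domain. A real "provably
# correct" pipeline would use a theorem prover or SMT solver, not a finite check.

def clamp(value: int, low: int, high: int) -> int:
    """Candidate generated code: clamp value into [low, high]."""
    return max(low, min(high, value))

def spec_holds(value: int, low: int, high: int) -> bool:
    result = clamp(value, low, high)
    in_range = low <= result <= high
    unchanged_if_inside = (result == value) if low <= value <= high else True
    return in_range and unchanged_if_inside

# Bounded verification over a small domain of inputs.
assert all(
    spec_holds(v, lo, hi)
    for v in range(-10, 11)
    for lo in range(-5, 6)
    for hi in range(lo, 6)
)
print("specification holds on the checked domain")
```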
02:22:00 And so in this sense, I would say that
02:22:03 if you use an early stage of technology to save labor
02:22:08 and don’t employ competent people,
02:22:11 but just to hack something together because you can,
02:22:14 that is very dangerous.
02:22:15 And if people are acting under these incentives
02:22:17 that they get away with delivering shoddy work
02:22:20 more cheaply using AI with less human oversight than before,
02:22:23 that’s very dangerous.
02:22:25 The thing is though, AI is still going to be unreliable,
02:22:28 perhaps less so than humans,
02:22:30 but it’ll be unreliable in novel ways.
02:22:33 And…
02:22:35 Yeah, but this is an empirical question.
02:22:37 And it’s something that we can figure out and work with.
02:22:39 So the issue is, do we trust the systems,
02:22:43 the social systems that we have in place
02:22:45 and the social systems that we can build and maintain
02:22:48 that they’re able to use AI responsibly?
02:22:50 If they can, then AI is good news.
02:22:52 If they cannot,
02:22:54 then it’s going to make the existing problems worse.
02:22:57 Well, and also who creates the AI, who controls it,
02:23:00 who makes money from it because it’s ultimately humans.
02:23:03 And then you start talking about
02:23:05 how much you trust the humans.
02:23:06 So the question is, what does who mean?
02:23:08 I don’t think that we have identity per se.
02:23:11 I think that the story of a human being is somewhat random.
02:23:15 What happens is more or less that everybody is acting
02:23:18 on their local incentives,
02:23:19 what they perceive to be their incentives.
02:23:21 And the question is, what are the incentives
02:23:24 that the one that is pressing the button is operating under?
02:23:28 Yeah.
02:23:30 It’s nice for those incentives to be transparent.
02:23:32 So, for example, I’ll give you an example.
02:23:36 There seems to be a significant distrust
02:23:38 of a tech, like entrepreneurs in the tech space
02:23:44 or people that run, for example, social media companies
02:23:47 like Mark Zuckerberg.
02:23:49 There’s not a complete transparency of incentives
02:23:53 under which that particular human being operates.
02:23:58 We can listen to the words he says
02:24:00 or what the marketing team says for a company,
02:24:02 but we don’t know.
02:24:04 And that becomes a problem when the algorithms
02:24:08 and the systems created by him and other people
02:24:12 in that company start having more and more impact
02:24:15 on society.
02:24:17 And that it starts, if the incentives were somehow
02:24:21 the definition and the explainability of the incentives
02:24:26 was decentralized such that nobody can manipulate it,
02:24:30 no propaganda type manipulation of like
02:24:35 how these systems actually operate could be done,
02:24:38 then yes, I think AI could achieve much fairer,
02:24:45 much more effective sort of like solutions
02:24:50 to difficult ethical problems.
02:24:53 But when there’s like humans in the loop,
02:24:55 manipulating the dissemination, the communication
02:25:00 of how the system actually works,
02:25:02 that feels like you can run into a lot of trouble.
02:25:05 And that’s why there’s currently a lot of distrust
02:25:07 for people at the heads of companies
02:25:10 that have increasingly powerful AI systems.
02:25:13 I suspect what happened traditionally in the US
02:25:16 was that since our decision making
02:25:18 is much more decentralized than in an authoritarian state,
02:25:22 people are making decisions autonomously
02:25:24 at many, many levels in a society.
02:25:26 What happened was that we created coherence
02:25:30 and cohesion in society by controlling what people thought
02:25:33 and what information they had.
02:25:35 The media synchronized public opinion
02:25:38 and social media have disrupted this.
02:25:40 It’s not, I think so much Russian influence or something,
02:25:43 it’s everybody’s influence.
02:25:45 It’s that a random person can come up
02:25:47 with a conspiracy theory and disrupt what people think.
02:25:52 And if that conspiracy theory is more compelling
02:25:55 or more attractive than the standardized
02:25:58 public conspiracy theory that we give people as a default,
02:26:01 then it might get more traction, right?
02:26:03 You suddenly have the situation that a single individual
02:26:05 somewhere on a farm in Texas has more listeners than CNN.
02:26:11 Which particular farmer are you referring to in Texas?
02:26:17 Probably no.
02:26:19 Yes, I had dinner with him a couple of times, okay.
02:26:21 Right, it’s an interesting situation
02:26:23 because you cannot get to be an anchor in CNN
02:26:25 if you don’t go through a complicated gatekeeping process.
02:26:30 And suddenly you have random people
02:26:32 without that gatekeeping process,
02:26:34 just optimizing for attention.
02:26:36 Not necessarily with a lot of responsibility
02:26:39 for the longterm effects of projecting these theories
02:26:42 into the public.
02:26:43 And now there is a push of making social media
02:26:46 more like traditional media,
02:26:48 which means that the opinion that is being projected
02:26:51 in social media is more limited to an acceptable range.
02:26:54 With the goal of getting society into safe waters
02:26:58 and increase the stability and cohesion of society again,
02:27:00 which I think is a laudable goal.
02:27:03 But of course it also is an opportunity
02:27:05 to seize the means of indoctrination.
02:27:08 And the incentives that people are under when they do this
02:27:11 are in such a way that the AI ethics that we would need
02:27:17 becomes very often something like AI politics,
02:27:20 which is basically partisan and ideological.
02:27:23 And this means that whatever one side says,
02:27:26 another side is going to be disagreeing with, right?
02:27:28 In the same way as when you turn masks or the vaccine
02:27:31 into a political issue,
02:27:33 if you say that it is politically virtuous
02:27:35 to get vaccinated,
02:27:36 it will mean that the people that don’t like you
02:27:39 will not want to get vaccinated, right?
02:27:41 And as soon as you have this partisan discourse,
02:27:43 it’s going to be very hard to make the right decisions
02:27:47 because the incentives get to be the wrong ones.
02:27:48 AI ethics needs to be super boring.
02:27:51 It needs to be done by people who do statistics
02:27:53 all the time and have extremely boring,
02:27:56 long winded discussions that most people cannot follow
02:27:59 because they are too complicated,
02:28:00 but that are dead serious.
02:28:02 These people need to be able to be better at statistics
02:28:05 than the leading machine learning researchers.
02:28:07 And at the moment, the AI ethics debate is the one
02:28:12 where you don’t have any barrier to entry, right?
02:28:14 Everybody who has a strong opinion
02:28:16 and is able to signal that opinion in the right way
02:28:18 can enter it.
02:28:19 And to me, that is a very frustrating thing
02:28:24 because the field is so crucially important
02:28:26 to our future.
02:28:27 It’s so crucially important,
02:28:28 but the only qualification you currently need
02:28:31 is to be outraged by the injustice in the world.
02:28:34 It’s more complicated, right?
02:28:36 Everybody seems to be outraged.
02:28:37 But let’s just say that the incentives
02:28:40 are not always the right ones.
02:28:42 So basically, I suspect that a lot of people
02:28:45 that enter this debate don’t have a vision
02:28:48 for what society should be looking like
02:28:50 in a way that is nonviolent,
02:28:51 where we preserve liberal democracy,
02:28:53 where we make sure that we all get along
02:28:56 and we are around in a few hundred years from now,
02:29:00 preferably with a comfortable
02:29:02 technological civilization around us.
02:29:04 I generally have a very foggy view of that world,
02:29:10 but I tend to try to follow,
02:29:12 and I think society should in some degree
02:29:13 follow the gradient of love,
02:29:16 increasing the amount of love in the world.
02:29:18 And whenever I see different policies
02:29:21 or algorithms or ideas that are not doing so,
02:29:24 obviously, that’s the ones that kind of resist.
02:29:27 So the thing that terrifies me about this notion
02:29:30 is I think that German fascism was driven by love.
02:29:35 It was just a very selective love.
02:29:37 It was a love that basically…
02:29:39 Now you’re just manipulating.
02:29:40 I mean, that’s, you have to be very careful.
02:29:45 You’re talking to the wrong person in this way about love.
02:29:50 So let’s talk about what love is.
02:29:52 And I think that love is the discovery of shared purpose.
02:29:55 It’s the recognition of the sacred in the other.
02:29:59 And this enables non transactional interactions.
02:30:02 But the size of the other that you include
02:30:07 needs to be maximized.
02:30:09 So it’s basically appreciation,
02:30:14 like deep appreciation of the world around you fully,
02:30:23 including the people that are very different than you,
02:30:25 people that disagree with you completely,
02:30:27 including people, including living creatures
02:30:30 outside of just people, including ideas.
02:30:33 And it’s like appreciation of the full mess of it.
02:30:36 And also it has to do with like empathy,
02:30:40 which is coupled with a lack of confidence
02:30:44 and certainty of your own rightness.
02:30:47 It’s like a radical open mindedness to the way forward.
02:30:51 I agree with every part of what you said.
02:30:53 And now if you scale it up,
02:30:54 what you recognize is that love is, in some sense,
02:30:58 the service to next level agency,
02:31:01 to the highest level agency that you can recognize.
02:31:04 It could be for instance, life on earth or beyond that,
02:31:07 where you could say intelligent complexity in the universe
02:31:11 that you try to maximize in a certain way.
02:31:14 But when you think it through,
02:31:15 it basically means a certain aesthetic.
02:31:18 And there is not one possible aesthetic,
02:31:20 there are many possible aesthetics.
02:31:22 And once you project an aesthetic into the future,
02:31:25 you can see that there are some which defect from it,
02:31:29 which are in conflict with it,
02:31:30 that are corrupt, that are evil.
02:31:33 You and me would probably agree that Hitler was evil
02:31:37 because the aesthetic of the world that he wanted
02:31:39 is in conflict with the aesthetic of the world
02:31:41 that you and me have in mind.
02:31:44 And so the things that he destroyed
02:31:48 are things we want to keep in the world.
02:31:50 There are kind of ways to deal with this,
02:31:55 I mean, Hitler is an easier case,
02:31:56 but perhaps it wasn’t so easy in the 30s, right?
02:31:59 To understand who is a Hitler and who is not.
02:32:02 No, it was just that there was no consensus
02:32:04 that the aesthetics that he had in mind were unacceptable.
02:32:07 Yeah, I mean, it’s difficult, love is complicated
02:32:12 because you can’t just be so open minded
02:32:17 that you let evil walk into the door,
02:32:20 but you can’t be so self assured
02:32:24 that you can always identify evil perfectly
02:32:29 because that’s what leads to Nazi Germany.
02:32:32 Having a certainty of what is and wasn’t evil,
02:32:34 like always drawing lines of good versus evil.
02:32:38 There seems to be, there has to be a dance
02:32:42 between hard stances, standing up
02:32:49 against what is wrong,
02:32:51 and at the same time, empathy and open mindedness
02:32:55 toward not knowing what is right and wrong,
02:32:59 a dance between those.
02:33:01 I found that when I watched the Miyazaki movies
02:33:03 that there is nobody who captures my spirituality
02:33:06 as well as he does.
02:33:07 It’s very interesting and just vicious, right?
02:33:10 There is something going on in his movies
02:33:13 that is very interesting.
02:33:14 So for instance, Mononoke is not only
02:33:17 an answer to Disney’s simplistic notion of Mowgli,
02:33:22 the jungle boy who was raised by wolves
02:33:24 and who, as soon as he sees people, realizes that he’s one of them,
02:33:27 but also to the way in which the moral life in nature
02:33:32 is simplified and romanticized and turned into kitsch.
02:33:36 It’s disgusting in the Disney movie.
02:33:37 And his answer to this, you see,
02:33:39 is that Mowgli is replaced by Mononoke, this wolf girl
02:33:42 who was raised by wolves and is fierce and dangerous
02:33:44 and who cannot be socialized because she cannot be tamed.
02:33:48 She cannot be part of human society.
02:33:50 And you see human society,
02:33:51 it’s something that is very, very complicated.
02:33:53 You see people extracting resources and destroying nature.
02:33:57 But the purpose is not to be evil,
02:34:00 but to be able to have a life that is free from,
02:34:04 for instance, oppression and violence
02:34:07 and to curb death and disease.
02:34:10 And you basically see this conflict
02:34:13 which cannot be resolved in a certain way.
02:34:15 You see this moment when nature is turned into a garden
02:34:18 and it loses most of what it actually is
02:34:20 and humans no longer submit to life and death
02:34:23 and nature, and to these questions there is no easy answer.
02:34:26 So it just turns it into something that is being observed
02:34:29 as a journey that happens.
02:34:31 And that happens with a certain degree of inevitability.
02:34:34 And the nice thing about all his movies
02:34:37 is there’s a certain main character
02:34:38 and it’s the same in all movies.
02:34:41 It’s this little girl that is basically Heidi.
02:34:45 And I suspect that happened because when he did field work
02:34:50 for working on the Heidi movies back then,
02:34:53 the Heidi animations, before he did his own movies,
02:34:55 he traveled to Switzerland and South Eastern Europe
02:35:00 and the Adriatic and so on and got an idea
02:35:03 about a certain aesthetic and a certain way of life
02:35:05 that informed his future thinking.
02:35:08 And Heidi has a very interesting relationship
02:35:11 to herself and to the world.
02:35:13 There’s nothing that she takes for herself.
02:35:15 She’s in a way fearless because she is committed
02:35:18 to a service, to a greater whole.
02:35:20 Basically, she is completely committed to serving God.
02:35:24 And it’s not an institutionalized God.
02:35:26 It has nothing to do with the Roman Catholic Church
02:35:28 or something like this.
02:35:30 But in some sense, Heidi is an embodiment
02:35:32 of the spirit of European Protestantism.
02:35:35 It’s this idea of a being that is completely perfect
02:35:38 and pure.
02:35:40 And it’s not a feminist vision
02:35:42 because she is not a girl boss or something like this.
02:35:48 She is the justification for the men in the audience
02:35:52 to protect her, to build a civilization around her
02:35:54 that makes her possible.
02:35:56 So she is not like the sacrifice of Jesus,
02:35:59 who is innocent and therefore nailed to the cross.
02:36:02 She is not being sacrificed.
02:36:04 She is being protected by everybody around her
02:36:07 who recognizes that she is sacred.
02:36:08 And there are enough around her to see that.
02:36:12 So this is a very interesting perspective.
02:36:14 There’s a certain notion of innocence.
02:36:16 And this notion of innocence is not universal.
02:36:18 It’s not in all cultures.
02:36:20 Hitler wasn’t innocent.
02:36:21 His idea of Germany was not that there is an innocence
02:36:25 that is being protected;
02:36:26 it was that a predator was going to triumph.
02:36:29 And it’s also something that is not at the core
02:36:31 of every religion.
02:36:32 There are many religions which don’t care about innocence.
02:36:34 They might care about increasing the status of something.
02:36:41 And that’s a very interesting notion that is quite unique,
02:36:44 and I’m not claiming it’s the optimal one.
02:36:47 It’s just a particular kind of aesthetic
02:36:49 which I think makes Miyazaki
02:36:51 into the most relevant Protestant philosopher today.
02:36:55 And you’re saying in terms of all the ways
02:36:59 that a society can operate perhaps the preservation
02:37:02 of innocence might be one of the best.
02:37:07 No, it’s just my aesthetic.
02:37:09 So it’s a particular way in which I feel
02:37:13 that I relate to the world that is natural
02:37:15 to my own socialization.
02:37:16 And maybe it’s not an accident
02:37:18 that I have cultural roots in Europe
02:37:22 in a particular world.
02:37:23 And so maybe it’s a natural convergence point
02:37:26 and it’s not something that you will find
02:37:28 in all other times in history.
02:37:30 So I’d like to ask you about Solzhenitsyn
02:37:33 and our individual role as ants in this very large society.
02:37:39 So he says that some version of the line
02:37:42 between good and evil runs through the heart of every man.
02:37:44 Do you think all of us are capable of good and evil?
02:37:47 Like, what’s our role in this play,
02:37:53 in this game we’re all playing?
02:37:55 Are all of us capable of playing any role?
02:37:59 Like, is there an ultimate responsibility to,
02:38:00 you mentioned maintaining innocence,
02:38:04 or whatever the highest ideal for a society you want is,
02:38:09 are all of us capable of living up to that?
02:38:11 And is that our responsibility,
02:38:13 or are there significant limitations
02:38:15 to what we’re able to do in terms of good and evil?
02:38:21 So there is a certain way in which, if you are not terrible,
02:38:24 if you are committed to some kind of civilizational agency,
02:38:29 a next level agent that you are serving,
02:38:31 some kind of transcendent principle,
02:38:34 then in the eyes of that transcendental principle,
02:38:36 you are able to discern good from evil.
02:38:38 Otherwise you cannot,
02:38:39 otherwise you have just individual aesthetics.
02:38:41 The cat that is torturing a mouse is not evil
02:38:44 because the cat does not envision
02:38:46 or no part of the world of the cat is envisioning a world
02:38:50 where there is no violence and nobody is suffering.
02:38:53 If you have an aesthetic where you want
02:38:55 to protect innocence,
02:38:56 then torturing somebody needlessly is evil,
02:39:00 but only then.
02:39:02 No, but within, I guess the question is within the aesthetic,
02:39:05 like within your sense of what is good and evil,
02:39:10 are we still, it seems like we’re still able
02:39:14 to commit evil.
02:39:17 Yes, so basically if you are committing
02:39:19 to this next level agent,
02:39:20 you are not necessarily this next level agent, right?
02:39:23 You are a part of it.
02:39:24 You have a relationship to it,
02:39:26 like the cell does to its organism, its hyperorganism.
02:39:29 And it only exists to the degree
02:39:31 that it’s being implemented by you and others.
02:39:34 And that means that you’re not completely fully serving it.
02:39:38 You have freedom in what you decide,
02:39:40 whether you are acting on your impulses
02:39:42 and local incentives, your feral impulses,
02:39:44 so to speak, or whether you’re committing to it.
02:39:47 And what you perceive then is a tension
02:39:49 between what you would be doing with respect
02:39:53 to the thing that you recognize as the sacred, if you do,
02:39:57 and what you’re actually doing.
02:39:58 And this is the line between good and evil,
02:40:01 right where you see, oh, I’m here acting
02:40:03 on my local incentives or impulses,
02:40:05 and here I’m acting on what I consider to be sacred.
02:40:08 And there’s a tension between those.
02:40:09 And this is the line between good and evil
02:40:11 that might run through your heart.
02:40:14 And if you don’t have that,
02:40:15 if you don’t have this relationship
02:40:17 to a transcendental agent,
02:40:18 you could call this relationship
02:40:19 to the next level agent soul, right?
02:40:21 It’s not a thing.
02:40:22 It’s not an immortal thing that is intrinsically valuable.
02:40:25 It’s a certain kind of relationship
02:40:27 that you project to understand what’s happening.
02:40:29 Somebody is serving this transcendental sacredness
02:40:31 or they’re not.
02:40:33 If you don’t have a soul, you cannot be evil.
02:40:35 You’re just a complex natural phenomenon.
02:40:39 So if you look at life, like starting today
02:40:42 or starting tomorrow, when we leave here today,
02:40:46 there’s a bunch of trajectories
02:40:48 that you can take through life, maybe countless.
02:40:53 Do you think some of these trajectories,
02:40:57 in your own conception of yourself,
02:40:59 some of those trajectories are the ideal life,
02:41:04 a life that if you were to be the hero of your life story,
02:41:09 you would want to be?
02:41:10 Like, is there some Joscha Bach you’re striving to be?
02:41:14 Like, this is the question I ask myself
02:41:15 as an individual trying to make a better world
02:41:20 in the best way that I could conceive of.
02:41:22 What is my responsibility there?
02:41:24 And how much am I responsible for the failure to do so?
02:41:28 Because I’m lazy and incompetent too often.
02:41:33 In my own perception.
02:41:35 In my own worldview, I’m not very important.
02:41:38 So I don’t have a place for myself as a hero
02:41:41 in my own world.
02:41:43 I’m trying to do the best that I can,
02:41:45 which is often not very good.
02:41:48 And so it’s not important for me to have status
02:41:52 or to be seen in a particular way.
02:41:55 It’s helpful if others can see me
02:41:57 or a few people can see me that can be my friends.
02:41:59 No, sorry, I want to clarify,
02:42:01 the hero I didn’t mean status or perception
02:42:05 or like some kind of marketing thing,
02:42:09 but more in private, in the quiet of your own mind.
02:42:14 Is there the kind of man you want to be
02:42:16 and would consider it a failure if you don’t become that?
02:42:20 That’s what I meant by hero.
02:42:21 Yeah, not really.
02:42:23 I don’t perceive myself as having such an identity.
02:42:26 And it’s also sometimes frustrating,
02:42:32 but it’s basically a lack of having this notion
02:42:37 of a father that I need to be emulating.
02:42:44 It’s interesting.
02:42:44 I mean, it’s the leaf floating down the river.
02:42:48 I worry that…
02:42:50 Sometimes it’s more like being the river.
02:42:59 I’m just a fat frog sitting on a leaf
02:43:02 on a dirty, muddy lake.
02:43:06 I wish I was waiting for a princess to kiss me.
02:43:13 Or the other way, I forgot which way it goes.
02:43:15 Somebody kisses somebody.
02:43:17 I can ask you, I don’t know if you know
02:43:20 who Michael Malice is,
02:43:21 but in terms of constructing systems of incentives,
02:43:27 it’s interesting to ask.
02:43:29 I don’t think I’ve talked to you about this before.
02:43:33 Malice espouses anarchism.
02:43:35 So he sees all government as fundamentally
02:43:40 getting in the way or even being destructive
02:43:42 to collaborations between human beings thriving.
02:43:49 What do you think?
02:43:50 What’s the role of government in a society that thrives?
02:43:56 Is anarchism at all compelling to you as a system?
02:44:00 So like not just small government,
02:44:02 but no government at all.
02:44:05 Yeah, I don’t see how this would work.
02:44:09 The government is an agent that imposes an offset
02:44:12 on your reward function, on your payout metrics.
02:44:15 So your behavior becomes compatible with the common good.
02:44:20 So the argument there is that you can have collectives
02:44:25 like governing organizations, but not government,
02:44:28 like where you’re born in a particular set of land
02:44:32 and therefore you must follow this rule or else.
02:44:38 You’re forced by what they call violence
02:44:41 because there’s an implied violence here.
02:44:44 So the key aspect of government is it protects you
02:44:52 from the rest of the world with an army and with police.
02:44:56 So it has a monopoly on violence.
02:45:00 It’s the only one that’s able to do violence.
02:45:02 So there are many forms of government,
02:45:03 not all governments do that.
02:45:05 But we find that in successful countries,
02:45:09 the government has a monopoly on violence.
02:45:12 And that means that you cannot get ahead
02:45:15 by starting your own army because the government
02:45:17 will come down on you and destroy you
02:45:19 if you try to do that.
02:45:20 And in countries where you can build your own army
02:45:23 and get away with it, some people will do it.
02:45:25 And these countries are what we call failed countries
02:45:28 in a way.
02:45:30 And if you don’t want to have violence,
02:45:33 the point is not to appeal to the moral intentions of people
02:45:36 because some people will use strategies,
02:45:39 if they can get ahead with them, that fill a particular kind
02:45:41 of ecological niche.
02:45:42 So you need to destroy that ecological niche.
02:45:45 And if an effective government has a monopoly on violence,
02:45:50 it can create a world where nobody is able to use violence
02:45:53 and get ahead.
02:45:54 So you want to use that monopoly on violence,
02:45:57 not to exert violence, but to make violence impossible,
02:46:00 to raise the cost of violence.
02:46:02 So people need to get ahead with nonviolent means.
02:46:06 So the idea is that you might be able to achieve that
02:46:09 in an anarchist state with companies.
02:46:12 So the forces of capitalism create security companies
02:46:18 where the one that’s most ethically sound rises to the top.
02:46:21 Basically, it would be a much better representative
02:46:24 of the people, because there is less of a stickiness
02:46:29 to a big military force sticking around
02:46:33 even though it has long outlived its purpose.
02:46:36 So you have groups of militants that are hopefully
02:46:40 efficiently organized because otherwise they’re going
02:46:41 to lose against the other groups of militants
02:46:44 and they are coordinating themselves with the rest
02:46:47 of society until they have a monopoly on violence.
02:46:51 How is that different from a government?
02:46:53 So it’s basically converging to the same thing.
02:46:56 So I was trying to argue with Malice,
02:47:00 I feel like it always converges towards government at scale,
02:47:03 but I think the idea is you can have a lot of collectives
02:47:06 that are, you basically never let anything scale too big.
02:47:11 So one of the problems with governments is it gets too big
02:47:15 in terms of like the size of the group
02:47:19 over which it has control.
02:47:23 My sense is that would happen anyway.
02:47:27 So a successful company like Amazon or Facebook,
02:47:30 I mean, it starts forming a monopoly
02:47:33 over entire populations,
02:47:36 not over just the hundreds of millions,
02:47:37 but billions of people.
02:47:39 So I don’t know, but there is something
02:47:43 about the abuses of power the government can have
02:47:46 when it has a monopoly on violence, right?
02:47:49 And so that’s a tension there, but…
02:47:53 So the question is how can you set the incentives
02:47:55 for government correctly?
02:47:56 And this mostly applies at the highest levels of government
02:47:59 and because we haven’t found a way to set them correctly,
02:48:02 we made the highest levels of government relatively weak.
02:48:06 And this is, I think, part of the reason
02:48:08 why we had difficulty coordinating the pandemic response
02:48:12 and China didn’t have that much difficulty.
02:48:14 And there is, of course, a much higher risk
02:48:17 of the abuse of power that exists in China
02:48:19 because the power is largely unchecked.
02:48:22 And the question is basically what happens
02:48:25 in the next generation, for instance.
02:48:26 Imagine that we would agree
02:48:28 that the current government of China is largely correct
02:48:30 and benevolent, and maybe we don’t agree on this,
02:48:33 but if we did, how can we make sure
02:48:36 that this stays like this?
02:48:37 And if you don’t have checks and balances,
02:48:40 division of power, it’s hard to achieve.
02:48:42 You don’t have a solution for that problem.
02:48:45 But the abolishment of government
02:48:47 basically would remove the control structure.
02:48:49 From a cybernetic perspective,
02:48:51 there is an optimal point in the system
02:48:54 that the regulation should be happening, right?
02:48:56 That you can measure the current incentives
02:48:59 and the regulator would be properly incentivized
02:49:01 to make the right decisions
02:49:03 and change the payout metrics of everything below it
02:49:06 in such a way that the local prisoner’s dilemmas
02:49:08 get resolved, right?
02:49:09 You cannot resolve the prisoner’s dilemma
02:49:12 without some kind of eternal control
02:49:14 that emulates an infinite game in a way.
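A minimal sketch of the game-theoretic idea gestured at here, assuming a standard two-player prisoner's dilemma: a regulator that adds an offset to the payout metrics (here, a penalty on defection) can turn cooperation into the dominant strategy. The payoff numbers and function names are illustrative assumptions, not anything specified in the conversation.

```python
# Illustrative sketch (not from the conversation): a regulator resolving a
# prisoner's dilemma by adding an offset to the players' payout metrics.
# All payoff numbers below are assumptions chosen for the example.

BASE_PAYOFF = {          # row player's payoff for (my_move, their_move)
    ("C", "C"): 3,       # both cooperate
    ("C", "D"): 0,       # I cooperate, they defect
    ("D", "C"): 5,       # I defect, they cooperate
    ("D", "D"): 1,       # both defect
}

def payoff(me: str, other: str, defection_penalty: float = 0.0) -> float:
    """My payoff after the regulator's offset is applied to defection."""
    value = BASE_PAYOFF[(me, other)]
    if me == "D":
        value -= defection_penalty  # the regulator's change to the payout metric
    return value

def best_response(other: str, defection_penalty: float) -> str:
    """The move that maximizes my payoff, given the other player's move."""
    return max(["C", "D"], key=lambda me: payoff(me, other, defection_penalty))

def dominant_strategy(defection_penalty: float):
    """Return the move that is a best response to everything, if one exists."""
    responses = {best_response(other, defection_penalty) for other in ["C", "D"]}
    return responses.pop() if len(responses) == 1 else None

if __name__ == "__main__":
    for penalty in (0.0, 3.0):
        print(f"penalty={penalty}: dominant strategy = {dominant_strategy(penalty)}")
    # penalty=0.0 -> "D": without the regulator, defection dominates.
    # penalty=3.0 -> "C": with the offset, cooperation becomes the stable choice.
```

With no penalty, defection dominates and both players land at the bad equilibrium; once the penalty exceeds the gain from defecting, cooperation becomes the best response to everything, which is the sense in which a regulator can resolve the local prisoner's dilemma.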
02:49:19 Yeah, I mean, there’s a sense in which
02:49:22 it seems like the parts of government
02:49:24 that don’t work well currently
02:49:27 don’t work because there aren’t good mechanisms
02:49:34 through which to interact,
02:49:35 for the citizenry to interact with government.
02:49:37 It basically hasn’t caught up in terms of technology.
02:49:41 And I think once you integrate
02:49:43 some of the digital revolution,
02:49:46 being able to have a lot of access to data,
02:49:48 being able to vote on different ideas at a local level,
02:49:52 at all levels, at the optimal level
02:49:54 like you’re saying, that can resolve the prisoner’s dilemmas,
02:49:58 and integrate AI to help you automate things
02:50:01 that don’t require human ingenuity,
02:50:07 I feel like that’s where government could operate really well
02:50:10 and could also break apart the inefficient bureaucracies
02:50:14 if needed.
02:50:15 There’ll be a strong incentive to be efficient and successful.
02:50:20 So throughout human history, we see an evolution
02:50:23 and evolutionary competition of modes of government
02:50:25 and of individual governments within these modes.
02:50:28 And every nation state in some sense
02:50:29 is some kind of organism that has found different solutions
02:50:33 for the problem of government.
02:50:34 And you could look at all these different models
02:50:37 and the different scales at which it exists
02:50:39 as empirical attempts to validate the idea
02:50:43 of how to build a better government.
02:50:45 And I suspect that the idea of anarchism
02:50:49 similar to the idea of communism
02:50:51 is the result of being disenchanted
02:50:54 with the ugliness of the real existing solutions
02:51:00 and the attempt to get to a utopia.
02:51:00 And I suspect that communism originally was not a utopia.
02:51:04 I think that in the same way as original Christianity,
02:51:07 it had a particular kind of vision.
02:51:10 And this vision is a society,
02:51:12 a mode of organization within the society
02:51:15 in which humans can coexist at scale without coercion.
02:51:20 In the same way as we do in a healthy family, right?
02:51:23 In a good family,
02:51:24 you don’t terrorize each other into compliance,
02:51:28 but you understand what everybody needs
02:51:30 and what everybody is able to contribute
02:51:32 and what the intended future of the whole thing is.
02:51:35 And everybody coordinates their behavior in the right way
02:51:38 and informs each other about how to do this.
02:51:40 And all the interactions that happen
02:51:42 are instrumental to making that happen, right?
02:51:45 Could this happen at scale?
02:51:47 And I think this is the idea of communism.
02:51:49 Communism is opposed to the idea
02:51:51 that we need economic terror
02:51:53 or other forms of terror to make that happen.
02:51:55 But in practice, what happened
02:51:56 is that the proto communist countries,
02:51:59 the real existing socialism,
02:52:01 replaced a part of the economic terror with moral terror,
02:52:04 right?
02:52:05 So we were told to do the right thing for moral reasons.
02:52:07 And of course it didn’t really work
02:52:09 and the economy eventually collapsed.
02:52:11 And the moral terror had actual real cost, right?
02:52:14 People were in prison
02:52:15 because they were morally noncompliant.
02:52:17 And the other thing is that the idea of communism
02:52:22 became a utopia.
02:52:24 So it basically was projected into the afterlife.
02:52:26 We were told in my childhood
02:52:28 that communism was a hypothetical society
02:52:31 to which we were in a permanent revolution
02:52:33 that justified everything
02:52:34 that was presently wrong with society morally.
02:52:37 But it was something that our grandchildren
02:52:39 probably would not ever see
02:52:41 because it was too ideal and too far in the future
02:52:43 to make it happen right now.
02:52:44 And people were just not there yet morally.
02:52:47 And the same thing happened with Christianity, right?
02:52:50 This notion of heaven was mythologized
02:52:52 and projected into an afterlife.
02:52:54 And I think this was just the idea of God’s kingdom
02:52:56 of this world in which we instantiate
02:52:59 the next level transcendental agent in the perfect form.
02:53:01 So everything goes smoothly and without violence
02:53:04 and without conflict and without this human messiness
02:53:07 and this economic messiness and the terror and coercion
02:53:11 that exist in the present societies.
02:53:13 And the idea that humans can at some point
02:53:16 exist at scale in a harmonious way and noncoercively
02:53:20 is untested, right?
02:53:21 A lot of people tested it
02:53:23 but didn’t get it to work so far.
02:53:25 And the utopia is a world where you get
02:53:27 all the good things without any of the bad things.
02:53:30 And you are, I think very susceptible to believe in utopias
02:53:34 when you are very young and don’t understand
02:53:36 that everything has to happen in causal patterns,
02:53:39 that there’s always feedback loops
02:53:40 that ultimately are closed.
02:53:42 There’s nothing that just happens
02:53:44 because it’s good or bad.
02:53:45 Good or bad don’t exist in isolation.
02:53:47 They only exist with respect to larger systems.
02:53:50 So can you intuit why utopias fail as systems?
02:53:57 Like, having a utopia that’s out there beyond the horizon,
02:54:01 is it because,
02:54:04 not only because it’s impossible to achieve utopias,
02:54:08 but because certain humans,
02:54:11 a certain small number of humans, start to sort of greedily
02:54:20 attain power and money and control and influence
02:54:25 as they become,
02:54:28 as they see the power in using this idea of a utopia
02:54:34 for propaganda?
02:54:35 It’s a bit like saying, why is my garden not perfect?
02:54:37 It’s because some evil weeds are overgrowing it
02:54:39 and they always do, right?
02:54:41 But this is not how it works.
02:54:43 A good garden is a system that is in balance
02:54:45 and requires minimal intervention by the gardener.
02:54:48 And so you need to create a system
02:54:51 that is designed to self stabilize.
02:54:54 And the design of social systems
02:54:55 requires not just the implementation
02:54:57 of the desired functionality,
02:54:58 but the next level of design, just as in biological systems.
02:55:01 You need to create a system that wants to converge
02:55:04 to the intended function.
02:55:06 And so instead of just creating an institution like the FDA
02:55:09 that is performing a particular kind of role in society,
02:55:13 you need to make sure that the FDA is actually driven
02:55:15 by a system that wants to do this optimally,
02:55:18 that is incentivized to do it optimally
02:55:19 and then makes the performance that is actually enacted
02:55:23 in every generation instrumental to that thing,
02:55:26 that actual goal, right?
02:55:27 And that is much harder to design and to achieve.
02:55:30 So if they design a system where,
02:55:32 and listen, communism also was, quote unquote, incentivized
02:55:36 to be a feedback loop system that achieves that utopia.
02:55:43 It’s just, it wasn’t working given human nature.
02:55:45 The incentives were not correct given human nature.
02:55:47 How do you incentivize people
02:55:50 when they are getting coal off the ground
02:55:52 to work as hard as possible?
02:55:53 Because it’s a terrible job
02:55:55 and it’s very bad for your health.
02:55:57 And right, how do you do this?
02:55:59 And you can give them prizes and medals and status
02:56:03 to some degree, right?
02:56:04 There’s only so much status to give for that.
02:56:06 And most people will not fall for this, right?
02:56:09 Or you can pay them and you probably have to pay them
02:56:12 in an asymmetric way because if you pay everybody the same
02:56:15 and you nationalize the coal mines,
02:56:19 eventually people will figure out
02:56:20 that they can game the system.
02:56:21 Yes, so you’re describing capitalism.
02:56:25 So capitalism is the present solution to this problem.
02:56:28 And what we also noticed that I think that Marx was correct
02:56:32 in saying that capitalism is prone to crisis,
02:56:35 that capitalism is a system that in its dynamics
02:56:38 is not convergent, but divergent.
02:56:40 It’s not a stable system.
02:56:42 And that eventually it produces an enormous potential
02:56:47 for productivity, but it also is systematically
02:56:50 misallocating resources.
02:56:52 So a lot of people cannot participate
02:56:54 in the production and consumption anymore, right?
02:56:57 And this is what we observed.
02:56:58 We observed that the middle class in the US is tiny.
02:57:01 A lot of people think that they’re middle class,
02:57:05 but if you are still flying economy,
02:57:07 you’re not middle class, right?
02:57:11 Every class is an order of magnitude smaller than the previous class.
02:57:14 And I think about classes really like airline classes.
02:57:23 I like that.
02:57:25 A lot of people are economy class, business class,
02:57:28 and very few are first class and some are budget.
02:57:30 I mean, some, I understand.
02:57:32 I think there’s, yeah, maybe some people,
02:57:36 probably I would push back
02:57:38 against that definition of the middle class.
02:57:39 It does feel like the middle class is pretty large,
02:57:41 but yes, there’s a discrepancy in terms of wealth.
02:57:45 So if you think about it in terms of the productivity
02:57:48 that our society could have,
02:57:50 there is no reason for anybody to fly economy, right?
02:57:53 We would be able to let everybody travel in style.
02:57:57 Well, but also some people like to be frugal
02:58:00 even when they’re billionaires, okay?
02:58:01 So like that, let’s take that into account.
02:58:04 I mean, we probably don’t need to be traveling lavishly,
02:58:07 but you also don’t need to be tortured, right?
02:58:09 There is a difference between frugal
02:58:11 and subjecting yourself to torture.
02:58:14 Listen, I love economy.
02:58:15 I don’t understand why you’re comparing
02:58:16 flying economy to torture.
02:58:19 I don’t, although on the flight here,
02:58:22 there were two crying babies next to me.
02:58:24 But that has nothing to do with economy.
02:58:26 It has to do with crying babies.
02:58:28 They’re very cute though.
02:58:29 So they kind of.
02:58:30 Yeah, I have two kids
02:58:31 and sometimes I have to go back to visit the grandparents.
02:58:35 And that means going from the west coast to Germany
02:58:41 and that’s a long flight.
02:58:42 Is it true that, so when you’re a father,
02:58:45 you grow immune to the crying and all that kind of stuff,
02:58:48 because for me, just not having kids,
02:58:52 other people’s kids can be quite annoying
02:58:54 when they’re crying and screaming
02:58:55 and all that kind of stuff.
02:58:57 When you have children and you are wired up
02:58:59 in the default natural way,
02:59:01 you’re lucky in this regard, you fall in love with them.
02:59:04 And this falling in love with them means
02:59:06 that you basically start to see the world through their eyes
02:59:10 and you understand that in a given situation,
02:59:12 they cannot do anything but express despair.
02:59:17 And so it becomes more differentiated.
02:59:19 I noticed that for instance,
02:59:21 my son is typically acting on a pure experience
02:59:25 of what things are like right now
02:59:28 and he has to do this right now.
02:59:30 And you have this small child that,
02:59:33 when he was a baby and so on,
02:59:35 was just immediately expressing what he felt.
02:59:37 And if you cannot regulate this from the outside,
02:59:39 there’s no point to be upset about it, right?
02:59:42 It’s like dealing with weather or something like this.
02:59:45 You all have to get through it
02:59:46 and it’s not easy for him either.
02:59:48 But if you also have a daughter,
02:59:51 maybe she is planning for that.
02:59:53 Maybe she understands that she’s sitting in the car
02:59:57 behind you and she’s screaming at the top of her lungs
02:59:59 and you almost get into an accident
03:00:01 and you really don’t know what to do.
03:00:03 What should I have done to make you stop screaming?
03:00:06 You could have given me candy.
03:00:10 I think that’s like a cat versus dog discussion.
03:00:12 I love it.
03:00:13 Cause you said like a fundamental aspect of that is love
03:00:19 that makes it all worth it.
03:00:21 What, in this monkey riding an elephant in a dream world,
03:00:26 what role does love play in the human condition?
03:00:31 I think that love is the facilitator
03:00:33 of non transactional interaction.
03:00:37 And you are observing your own purposes.
03:00:40 Some of these purposes go beyond your ego.
03:00:42 They go beyond the particular organism
03:00:45 that you are and your local interests.
03:00:46 That’s what you mean by non transactional.
03:00:48 Yes, so basically when you are acting
03:00:50 in a transactional way, it means that you are expecting
03:00:52 something in return
03:00:55 from the one that you’re interacting with.
03:00:58 You are interacting with a random stranger,
03:00:59 you buy something from them on eBay,
03:01:01 you expect a fair value for the money that you sent them
03:01:03 and vice versa.
03:01:05 Because you don’t know that person,
03:01:06 you don’t have any kind of relationship to them.
03:01:09 But when you know this person a little bit better
03:01:10 and you know the situation that they’re in,
03:01:12 you understand what they try to achieve in their life
03:01:14 and you approve because you realize that they’re
03:01:17 in some sense serving the same human sacredness as you are.
03:01:22 And they need a thing that you have,
03:01:23 so maybe you give it to them as a present.
03:01:26 But, I mean, the feeling itself of joy is a kind of benefit,
03:01:32 is a kind of transaction, like…
03:01:34 Yes, but the joy is not the point.
03:01:36 The joy is the signal that you get.
03:01:38 It’s the reinforcement signal that your brain sends to you
03:01:40 because you are acting on the incentives
03:01:43 of the agent that you’re a part of.
03:01:45 We are meant to be part of something larger.
03:01:48 This is the way in which we out competed other hominins.
03:01:54 Take that Neanderthals.
03:01:56 Yeah, right.
03:01:57 And also other humans.
03:01:59 There was a population bottleneck for human society
03:02:03 that leads to an extreme lack of genetic diversity
03:02:06 among humans.
03:02:07 If you look at the Bushmen in the Kalahari,
03:02:11 basically tribes that are not that far distant
03:02:13 from each other have more genetic diversity
03:02:15 than exists between Europeans and Chinese.
03:02:19 And that’s because basically the out of Africa population
03:02:23 at some point had a bottleneck
03:02:25 of just a few thousand individuals.
03:02:27 And what probably happened is not that at any time
03:02:30 the number of people shrank below a few hundred thousand.
03:02:34 What probably happened is that there was a small group
03:02:37 that had a decisive mutation that produced an advantage.
03:02:40 And this group multiplied and killed everybody else.
03:02:44 And we are descendants of that group.
03:02:46 Yeah, I wonder what the peculiar characteristics
03:02:50 of that group were.
03:02:52 Yeah.
03:02:53 I mean, we can never know.
03:02:53 Me too, and a lot of people do.
03:02:55 We can only just listen to the echoes in us,
03:02:58 like the ripples that are still within us.
03:03:01 So I suspect what eventually made a big difference
03:03:04 was the ability to organize at scale,
03:03:07 to program each other.
03:03:09 With ideas.
03:03:11 That we became programmable,
03:03:12 that we were willing to work in lockstep,
03:03:14 that we went above the tribal level,
03:03:17 that we no longer were groups of a few hundred individuals
03:03:20 and acted on direct reputation systems transactionally,
03:03:24 but that we basically evolved an adaptation
03:03:27 to become state building.
03:03:28 Yeah.
03:03:31 To form collectives outside of the direct collectives.
03:03:35 Yes, and that’s basically a part of us became committed
03:03:38 to serving something outside of what we know.
03:03:41 Yeah, then that’s kind of what love is.
03:03:44 And it’s terrifying because it meant
03:03:45 that we eradicated the others.
03:03:48 Right, it’s a force.
03:03:49 It’s an adaptive force that gets us ahead in evolution,
03:03:52 which means we displace something else
03:03:54 that doesn’t have that.
03:03:56 Oh, so we had to murder a lot of people
03:03:58 that weren’t about love.
03:04:00 So love led to destruction.
03:04:01 They didn’t have the same strong love as we did.
03:04:04 Right, that’s why I mentioned this thing with fascism.
03:04:07 When you see these speeches, do you want total war?
03:04:12 And everybody says, yes, right?
03:04:14 This is this big, oh my God, we are part of something
03:04:17 that is more important than me
03:04:18 that gives meaning to my existence.
03:04:22 Fair enough.
03:04:27 Do you have advice for young people today
03:04:30 in high school, in college,
03:04:33 that are thinking about what to do with their career,
03:04:37 with their life, so that at the end of the whole thing,
03:04:40 they can be proud of what they did?
03:04:43 Don’t cheat.
03:04:45 Have integrity, aim for integrity.
03:04:48 So what does integrity look like when you’re the river
03:04:50 or the leaf or the fat frog in a lake?
03:04:54 It basically means that you try to figure out
03:04:57 what the thing is that is the most right.
03:05:02 And this doesn’t mean that you have to look
03:05:04 for what other people tell you what’s right,
03:05:07 but you have to aim for moral autonomy.
03:05:09 So things need to be right independently
03:05:12 of what other people say.
03:05:14 I always felt that when people told me
03:05:17 to listen to what others say, like read the room,
03:05:22 build your ideas of what’s true
03:05:25 based on the high status people of your in group,
03:05:27 that does not protect me from fascism.
03:05:29 The only way to protect yourself from fascism
03:05:31 is to decide: is the world that is being built here
03:05:35 the world that I want to be in?
03:05:37 And so in some sense, try to make your behavior sustainable,
03:05:41 act in such a way that you would feel comfortable
03:05:44 on all sides of the transaction.
03:05:46 Realize that everybody is you in a different timeline,
03:05:48 but is seeing things differently
03:05:51 and has reasons to do so.
03:05:53 Yeah, I’ve come to realize this recently,
03:05:58 that there is an inner voice
03:05:59 that tells you what’s right and wrong.
03:06:02 And speaking of reading the room,
03:06:06 there’s times what integrity looks like
03:06:08 is there’s times when a lot of people
03:06:10 are doing something wrong.
03:06:12 And what integrity looks like
03:06:13 is not going on Twitter and tweeting about it,
03:06:16 but quietly not participating, not doing it.
03:06:20 So it’s not signaling or all this kind of stuff,
03:06:24 but actually living what you think is right.
03:06:28 Like living it, not signaling it.
03:06:28 There’s also sometimes this expectation
03:06:30 that others are like us.
03:06:32 So imagine the possibility
03:06:34 that some of the people around you are space aliens
03:06:37 that only look human, right?
03:06:39 So they don’t have the same priors as you do.
03:06:41 They don’t have the same impulses
03:06:44 about what’s right and wrong.
03:06:45 There’s a large diversity in these basic impulses
03:06:48 that people can have in a given situation.
03:06:51 And now realize that you are a space alien, right?
03:06:54 You are not actually human.
03:06:55 You think that you are human,
03:06:57 but you don’t know what it means,
03:06:58 like what it’s like to be human.
03:07:00 You just make it up as you go along like everybody else.
03:07:04 And you have to figure that out,
03:07:05 what it means that you are a full human being,
03:07:09 what it means to be human in the world
03:07:11 and how to connect with others on that.
03:07:13 And there is also something, don’t be afraid
03:07:17 that if you do this, you’re not good enough.
03:07:20 Because if you are acting on these incentives of integrity,
03:07:23 you become trustworthy.
03:07:25 That’s the way in which you can recognize each other.
03:07:28 There is a particular place where you can meet.
03:07:30 You can figure out what that place is,
03:07:33 where you will give support to people
03:07:35 because you realize that they act with integrity
03:07:38 and they will also do that.
03:07:40 So in some sense, you are safe if you do that.
03:07:43 You’re not always protected.
03:07:44 There are people who will abuse you
03:07:47 and who are bad actors in a way
03:07:49 that is hard to imagine before you meet them.
03:07:52 But there are also people who will try to protect you.
03:07:57 Yeah, that’s such a, thank you for saying that.
03:08:00 That’s such a hopeful message
03:08:03 that no matter what happens to you,
03:08:05 there’ll be a place, there’s people you’ll meet
03:08:11 that also have what you have
03:08:15 and you will find happiness there and safety there.
03:08:20 Yeah, but it doesn’t need to end well.
03:08:21 It can also all go wrong.
03:08:23 So there’s no guarantees in this life.
03:08:26 So you can do everything right and you still can fail
03:08:29 and you can see horrible things happening to you
03:08:32 that traumatize you and mutilate you
03:08:35 and you have to be grateful if it doesn’t happen.
03:08:40 And ultimately be grateful no matter what happens
03:08:42 because even just being alive is pretty damn nice.
03:08:46 Yeah, even that, you know.
03:08:49 The gratefulness in some sense is also just generated
03:08:52 by your brain to keep you going; it’s all a trick.
03:08:58 Speaking of which, Camus said,
03:09:02 I see many people die because they judge
03:09:05 that life is not worth living.
03:09:08 I see others paradoxically getting killed
03:09:10 for the ideas or illusions that give them
03:09:12 a reason for living.
03:09:15 What is called the reason for living
03:09:16 is also an excellent reason for dying.
03:09:19 I therefore conclude that the meaning of life
03:09:22 is the most urgent of questions.
03:09:24 So I have to ask, Joscha Bach: what is the meaning of life?
03:09:31 It is an urgent question according to Camus.
03:09:35 I don’t think that there’s a single answer to this.
03:09:37 Nothing makes sense unless the mind makes it so.
03:09:41 So you basically have to project a purpose.
03:09:44 And if you zoom out far enough,
03:09:47 there’s the heat death of the universe
03:09:49 and everything is meaningless,
03:09:50 everything is just a blip in between.
03:09:52 And the question is, do you find meaning
03:09:54 in this blip in between?
03:09:55 Do you find meaning in observing squirrels?
03:09:59 Do you find meaning in raising children
03:10:01 and projecting a multi generational organism
03:10:04 into the future?
03:10:05 Do you find meaning in projecting an aesthetic
03:10:08 of the world that you like to the future
03:10:10 and trying to serve that aesthetic?
03:10:12 And if you do, then life has that meaning.
03:10:15 And if you don’t, then it doesn’t.
03:10:17 I kind of enjoy the idea that you just create
03:10:21 the most vibrant, the most weird,
03:10:25 the most unique kind of blip you can,
03:10:28 given your environment, given your set of skills,
03:10:32 just be the most weird set of,
03:10:38 like local pocket of complexity you can be.
03:10:41 So that like, when people study the universe,
03:10:44 they’ll pause and be like, oh, that’s weird.
03:10:47 It looks like a useful strategy,
03:10:50 but of course it’s still motivated reasoning.
03:10:52 You’re obviously acting on your incentives here.
03:10:57 It’s still a story we tell ourselves within a dream
03:11:00 that’s hardly in touch with the reality.
03:11:03 It’s definitely a good strategy if you are a podcaster.
03:11:10 And a human, which I’m still trying to figure out if I am.
03:11:13 It has a mutual relationship somehow.
03:11:15 Somehow.
03:11:16 Joscha, you’re one of the most incredible people I know.
03:11:20 I really love talking to you.
03:11:22 I love talking to you again,
03:11:23 and it’s really an honor that you spend
03:11:26 your valuable time with me.
03:11:27 I hope we get to talk many times
03:11:28 through our short and meaningless lives.
03:11:33 Or meaningful.
03:11:34 Or meaningful.
03:11:35 Thank you, Lex.
03:11:36 I enjoyed this conversation very much.
03:11:39 Thanks for listening to this conversation with Joscha Bach.
03:11:41 A thank you to Coinbase, Codecademy, Linode,
03:11:45 NetSuite, and ExpressVPN.
03:11:48 Check them out in the description to support this podcast.
03:11:52 Now, let me leave you with some words from Carl Jung.
03:11:55 People will do anything, no matter how absurd,
03:11:59 in order to avoid facing their own souls.
03:12:01 One does not become enlightened
03:12:03 by imagining figures of light,
03:12:05 but by making the darkness conscious.
03:12:09 Thank you for listening, and hope to see you next time.