Transcript
00:00:00 The following is a conversation with Peter Singer,
00:00:03 professor of bioethics at Princeton University,
00:00:06 best known for his 1975 book, Animal Liberation,
00:00:10 that makes an ethical case against eating meat.
00:00:14 He has written brilliantly from an ethical perspective
00:00:17 on extreme poverty, euthanasia, human genetic selection,
00:00:21 sports doping, the sale of kidneys,
00:00:23 and generally happiness, including in his books,
00:00:28 Ethics in the Real World, and The Life You Can Save.
00:00:32 He was a key popularizer of the effective altruism movement
00:00:36 and is generally considered one of the most influential
00:00:39 philosophers in the world.
00:00:42 Quick summary of the ads.
00:00:43 Two sponsors, Cash App and Masterclass.
00:00:47 Please consider supporting the podcast
00:00:48 by downloading Cash App and using code LexPodcast
00:00:52 and signing up at masterclass.com slash Lex.
00:00:55 Click the links, buy the stuff.
00:00:57 It really is the best way to support the podcast
00:01:00 and the journey I’m on.
00:01:02 As you may know, I primarily eat a ketogenic or carnivore diet,
00:01:07 which means that most of my diet is made up of meat.
00:01:10 I do not hunt the food I eat, though one day I hope to.
00:01:15 I love fishing, for example.
00:01:17 Fishing and eating the fish I catch
00:01:19 has always felt much more honest than participating
00:01:23 in the supply chain of factory farming.
00:01:26 From an ethics perspective, this part of my life
00:01:29 has always had a cloud over it.
00:01:31 It makes me think.
00:01:33 I’ve tried a few times in my life
00:01:35 to reduce the amount of meat I eat.
00:01:37 But for some reason, whatever the makeup of my body,
00:01:41 whatever the way I practice dieting,
00:01:44 I get a lot of mental and physical energy
00:01:48 and performance from eating meat.
00:01:50 So both intellectually and physically,
00:01:53 it’s a continued journey for me.
00:01:56 I return to Peter’s work often to reevaluate the ethics
00:02:00 of how I live this aspect of my life.
00:02:03 Let me also say that you may be a vegan
00:02:06 or you may be a meat eater and may be upset by the words I say
00:02:09 or Peter says, but I ask for this podcast
00:02:13 and other episodes of this podcast
00:02:16 that you keep an open mind.
00:02:18 I may and probably will talk with people you disagree with.
00:02:21 Please try to really listen, especially
00:02:25 to people you disagree with.
00:02:27 And give me and the world the gift
00:02:29 of being a participant in a patient, intelligent,
00:02:33 and nuanced discourse.
00:02:34 If your instinct and desire is to be a voice of mockery
00:02:38 towards those you disagree with, please unsubscribe.
00:02:42 My source of joy and inspiration here
00:02:44 has been to be a part of a community that thinks deeply
00:02:48 and speaks with empathy and compassion.
00:02:51 That is what I hope to continue being a part of
00:02:53 and I hope you join as well.
00:02:56 If you enjoy this podcast, subscribe on YouTube,
00:02:58 review it with five stars on Apple Podcast,
00:03:01 follow on Spotify, support on Patreon,
00:03:04 or connect with me on Twitter at Lex Fridman.
00:03:07 As usual, I’ll do a few minutes of ads now
00:03:09 and never any ads in the middle
00:03:11 that can break the flow of the conversation.
00:03:14 This show is presented by Cash App,
00:03:16 the number one finance app in the App Store.
00:03:18 When you get it, use code LEXPODCAST.
00:03:22 Cash App lets you send money to friends,
00:03:24 buy Bitcoin, and invest in the stock market
00:03:27 with as little as one dollar.
00:03:29 Since Cash App allows you to buy Bitcoin,
00:03:31 let me mention that cryptocurrency in the context
00:03:34 of the history of money is fascinating.
00:03:37 I recommend The Ascent of Money
00:03:39 as a great book on this history.
00:03:41 Debits and credits on ledgers
00:03:43 started around 30,000 years ago.
00:03:45 The US dollar was created over 200 years ago
00:03:48 and the first decentralized cryptocurrency
00:03:51 was released just over 10 years ago.
00:03:53 So given that history, cryptocurrency is still very much
00:03:57 in its early days of development,
00:03:58 but it’s still aiming to and just might
00:04:01 redefine the nature of money.
00:04:04 So again, if you get Cash App from the App Store
00:04:07 or Google Play and use the code LEXPODCAST,
00:04:10 you get $10 and Cash App will also donate $10 to FIRST,
00:04:14 an organization that is helping to advance
00:04:16 robotic system education for young people around the world.
00:04:20 This show is sponsored by Masterclass.
00:04:23 Sign up at masterclass.com slash LEX
00:04:26 to get a discount and to support this podcast.
00:04:29 When I first heard about Masterclass,
00:04:31 I thought it was too good to be true.
00:04:33 For $180 a year, you get an all access pass
00:04:36 to watch courses from, to list some of my favorites,
00:04:40 Chris Hadfield on space exploration,
00:04:42 Neil deGrasse Tyson on scientific thinking and communication,
00:04:46 Will Wright, creator of SimCity and The Sims, on game design.
00:04:50 I promise I’ll start streaming games at some point soon.
00:04:53 Carlos Santana on guitar, Garry Kasparov on chess,
00:04:57 Daniel Negreanu on poker, and many more.
00:05:01 Chris Hadfield explaining how rockets work
00:05:04 and the experience of being launched into space alone
00:05:07 is worth the money.
00:05:08 By the way, you can watch it on basically any device.
00:05:12 Once again, sign up at masterclass.com slash LEX
00:05:16 to get a discount and to support this podcast.
00:05:19 And now, here’s my conversation with Peter Singer.
00:05:25 When did you first become conscious of the fact
00:05:27 that there is much suffering in the world?
00:05:32 I think I was conscious of the fact
00:05:33 that there’s a lot of suffering in the world
00:05:35 pretty much as soon as I was able to understand
00:05:38 anything about my family and its background
00:05:40 because I lost three of my four grandparents
00:05:44 in the Holocaust and obviously I knew
00:05:48 why I only had one grandparent
00:05:52 and she herself had been in the camps and survived,
00:05:54 so I think I knew a lot about that pretty early.
00:05:58 My entire family comes from the Soviet Union.
00:06:01 I was born in the Soviet Union.
00:06:05 World War II has deep roots in the culture
00:06:07 and the suffering that the war brought
00:06:10 the millions of people who died is in the music,
00:06:14 is in the literature, is in the culture.
00:06:16 What do you think was the impact
00:06:18 of the war broadly on our society?
00:06:25 The war had many impacts.
00:06:28 I think one of them, a beneficial impact,
00:06:31 is that it showed what racism
00:06:34 and authoritarian government can do
00:06:37 and at least as far as the West was concerned,
00:06:41 I think that meant that I grew up in an era
00:06:43 in which there wasn’t the kind of overt racism
00:06:48 and antisemitism that had existed for my parents in Europe.
00:06:52 I was growing up in Australia
00:06:53 and certainly that was clearly seen
00:06:57 as something completely unacceptable.
00:07:00 There was also, though, a fear of a further outbreak of war
00:07:05 which this time we expected would be nuclear
00:07:08 because of the way the Second World War had ended,
00:07:11 so there was this overshadowing of my childhood
00:07:16 about the possibility that I would not live to grow up
00:07:19 and be an adult because of a catastrophic nuclear war.
00:07:25 The film On the Beach was made
00:07:28 in which the city that I was living in,
00:07:29 Melbourne, was the last place on Earth
00:07:32 to have living human beings
00:07:34 because of the nuclear cloud
00:07:36 that was spreading from the North,
00:07:38 so that certainly gave us a bit of that sense.
00:07:42 There were many, there were clearly many other legacies
00:07:45 that we got of the war as well
00:07:47 and the whole setup of the world
00:07:49 and the Cold War that followed.
00:07:51 All of that has its roots in the Second World War.
00:07:55 There is much beauty that comes from war.
00:07:58 Sort of, I had a conversation with Eric Weinstein.
00:08:01 He said everything is great about war
00:08:03 except all the death and suffering.
00:08:08 Do you think there’s something positive
00:08:11 that came from the war,
00:08:13 the mirror that it put to our society,
00:08:16 sort of the ripple effects on it, ethically speaking?
00:08:20 Do you think there are positive aspects to war?
00:08:24 I find it hard to see positive aspects in war
00:08:27 and some of the things that other people think of
00:08:30 as positive and beautiful may be questionable.
00:08:35 So there’s a certain kind of patriotism.
00:08:38 People say during wartime, we all pull together,
00:08:41 we all work together against a common enemy
00:08:44 and that’s true.
00:08:45 An outside enemy does unite a country
00:08:47 and in general, it’s good for countries to be united
00:08:49 and have common purposes
00:08:51 but it also engenders a kind of a nationalism
00:08:55 and a patriotism that can’t be questioned
00:08:57 and that I’m more skeptical about.
00:09:01 What about the brotherhood
00:09:04 that people talk about from soldiers?
00:09:08 The sort of counterintuitive, sad idea
00:09:12 that the closest that people feel to each other
00:09:16 is in those moments of suffering,
00:09:17 of being at the sort of the edge
00:09:20 of seeing your comrades dying in your arms.
00:09:24 That somehow brings people extremely closely together.
00:09:27 Suffering brings people closer together.
00:09:29 How do you make sense of that?
00:09:31 It may bring people close together
00:09:33 but there are other ways of bonding
00:09:36 and being close to people I think
00:09:37 without the suffering and death that war entails.
00:09:42 Perhaps you could see, you could already hear
00:09:44 the romanticized Russian in me.
00:09:48 We tend to romanticize suffering just a little bit
00:09:50 in our literature and culture and so on.
00:09:53 Could you take a step back
00:09:54 and I apologize if it’s a ridiculous question
00:09:57 but what is suffering?
00:09:59 If you would try to define what suffering is,
00:10:03 how would you go about it?
00:10:05 Suffering is a conscious state.
00:10:09 There can be no suffering for a being
00:10:11 who is completely unconscious
00:10:14 and it’s distinguished from other conscious states
00:10:17 in terms of being one that, considered just in itself,
00:10:22 we would rather be without.
00:10:25 It’s a conscious state that we want to stop
00:10:27 if we’re experiencing or we want to avoid having again
00:10:31 if we’ve experienced it in the past.
00:10:34 And that’s, as I say, emphasized for its own sake
00:10:37 because of course people will say,
00:10:39 well, suffering strengthens the spirit.
00:10:41 It has good consequences.
00:10:44 And sometimes it does have those consequences
00:10:47 and of course sometimes we might undergo suffering.
00:10:50 We set ourselves a challenge to run a marathon
00:10:53 or climb a mountain or even just to go to the dentist
00:10:57 so that the toothache doesn’t get worse
00:10:59 even though we know the dentist is gonna hurt us
00:11:00 to some extent.
00:11:01 So I’m not saying that we never choose suffering
00:11:04 but I am saying that other things being equal,
00:11:07 we would rather not be in that state of consciousness.
00:11:10 Is the ultimate goal sort of,
00:11:12 you have the new 10 year anniversary release
00:11:15 of the Life You Can Save book, really influential book.
00:11:18 We’ll talk about it a bunch of times
00:11:20 throughout this conversation
00:11:21 but do you think it’s possible
00:11:25 to eradicate suffering or is that the goal
00:11:29 or do we want to achieve a kind of minimum threshold
00:11:36 of suffering and then keeping a little drop of poison
00:11:43 to keep things interesting in the world?
00:11:46 In practice, I don’t think we ever will eliminate suffering
00:11:50 so I think that little drop of poison as you put it
00:11:53 or, if you like, the contrasting dash of an unpleasant color,
00:11:58 perhaps something like that
00:11:59 in an otherwise harmonious and beautiful composition,
00:12:04 that is gonna always be there.
00:12:07 If you ask me whether, in theory,
00:12:09 if we could get rid of it, we should,
00:12:12 I think the answer depends on whether in fact
00:12:14 we would be better off,
00:12:17 or whether by eliminating the suffering
00:12:20 we would also eliminate some of the highs,
00:12:22 the positive highs. And if that's so,
00:12:24 then we might be prepared to say
00:12:27 it’s worth having a minimum of suffering
00:12:30 in order to have the best possible experiences as well.
00:12:34 Is there a relative aspect to suffering?
00:12:37 So when you talk about eradicating poverty in the world,
00:12:42 is it that the more you succeed,
00:12:44 the more the bar of what defines poverty rises
00:12:47 or is there at the basic human ethical level
00:12:51 a bar that’s absolute that once you get above it
00:12:55 then we can morally converge
00:13:00 to feeling like we have eradicated poverty?
00:13:04 I think they’re both and I think this is true for poverty
00:13:08 as well as suffering.
00:13:09 There’s an objective level of suffering or of poverty
00:13:14 where we’re talking about objective indicators
00:13:17 like you’re constantly hungry,
00:13:22 you can’t get enough food,
00:13:24 you’re constantly cold, you can’t get warm,
00:13:28 you have some physical pains that you’re never rid of.
00:13:32 I think those things are objective
00:13:35 but it may also be true that if you do get rid of that
00:13:38 and you get to the stage
00:13:40 where all of those basic needs have been met,
00:13:45 there may still be then new forms of suffering that develop
00:13:48 and perhaps that’s what we’re seeing
00:13:50 in the affluent societies we have
00:13:52 that people get bored for example,
00:13:55 they don’t need to spend so many hours a day earning money
00:13:58 to get enough to eat and shelter.
00:14:01 So now they’re bored, they lack a sense of purpose.
00:14:05 That can happen.
00:14:06 And that then is a kind of a relative suffering
00:14:10 that is distinct from the objective forms of suffering.
00:14:14 But in your focus on eradicating suffering,
00:14:17 you don’t think about that kind of,
00:14:19 the kind of interesting challenges and suffering
00:14:22 that emerges in affluent societies,
00:14:24 that’s just not, in your ethical philosophical brain,
00:14:28 is that of interest at all?
00:14:31 It would be of interest to me if we had eliminated
00:14:34 all of the objective forms of suffering,
00:14:36 which I think of as generally more severe
00:14:40 and also perhaps easier at this stage anyway
00:14:43 to know how to eliminate.
00:14:45 So yes, in some future state when we’ve eliminated
00:14:49 those objective forms of suffering,
00:14:50 I would be interested in trying to eliminate
00:14:53 the relative forms as well.
00:14:55 But that’s not a practical need for me at the moment.
00:14:59 Sorry to linger on it because you kind of said it,
00:15:02 but just is elimination the goal for the affluent society?
00:15:07 So is there, do you see suffering as a creative force?
00:15:14 Suffering can be a creative force.
00:15:17 I think repeating what I said about the highs
00:15:20 and whether we need some of the lows
00:15:22 to experience the highs.
00:15:24 So it may be that suffering makes us more creative
00:15:26 and we regard that as worthwhile.
00:15:29 Maybe that brings some of those highs with it
00:15:32 that we would not have had if we’d had no suffering.
00:15:36 I don’t really know.
00:15:37 Many people have suggested that
00:17:39 and I certainly have no basis for denying it.
00:15:44 And if it’s true, then I would not want
00:15:47 to eliminate suffering completely.
00:15:50 But the focus is on the absolute,
00:15:54 not to be cold, not to be hungry.
00:15:56 Yes, that’s at the present stage
00:15:59 of where the world’s population is, that’s the focus.
00:16:03 Talking about human nature for a second,
00:16:06 do you think people are inherently good
00:16:08 or do we all have good and evil in us
00:16:11 that basically everyone is capable of evil
00:16:14 based on the environment?
00:16:17 Certainly most of us have potential for both good and evil.
00:16:21 I’m not prepared to say that everyone is capable of evil.
00:16:24 Maybe some people who even in the worst of circumstances
00:16:27 would not be capable of it,
00:16:28 but most of us are very susceptible
00:16:32 to environmental influences.
00:16:34 So when we look at things
00:16:36 that we were talking about previously,
00:16:37 let’s say what the Nazis did during the Holocaust,
00:16:43 I think it’s quite difficult to say,
00:16:46 I know that I would not have done those things
00:16:50 even if I were in the same circumstances
00:16:52 as those who did them.
00:16:54 Even if let’s say I had grown up under the Nazi regime
00:16:58 and had been indoctrinated with racist ideas,
00:17:02 had also had the idea that I must obey orders,
00:17:07 follow the commands of the Führer,
00:17:11 plus of course perhaps the threat
00:17:12 that if I didn’t do certain things,
00:17:14 I might get sent to the Russian front
00:17:16 and that would be a pretty grim fate.
00:17:19 I think it’s really hard for anybody to say,
00:17:22 nevertheless, I know I would not have killed those Jews
00:17:26 or whatever else it was that they were doing.
00:17:28 Well, what’s your intuition?
00:17:29 How many people would be able to say that?
00:17:32 Truly to be able to say it,
00:17:34 I think very few, less than 10%.
00:17:37 To me, it seems a very interesting
00:17:39 and powerful thing to meditate on.
00:17:42 So I’ve read a lot about the war, World War II,
00:17:45 and I can’t escape the thought
00:17:47 that I would have not been one of the 10%.
00:17:51 Right, I have to say, I simply don’t know.
00:17:55 I would like to hope that I would have been one of the 10%,
00:17:59 but I don’t really have any basis
00:18:00 for claiming that I would have been different
00:18:04 from the majority.
00:18:06 Is it a worthwhile thing to contemplate?
00:18:09 It would be interesting if we could find a way
00:18:11 of really finding these answers.
00:18:13 There obviously is quite a bit of research
00:18:16 on people during the Holocaust,
00:18:19 on how ordinary Germans got led to do terrible things,
00:18:24 and there are also studies of the resistance,
00:18:28 some heroic people in the White Rose group, for example,
00:18:32 who resisted even though they knew
00:18:34 they were likely to die for it.
00:18:37 But I don’t know whether these studies
00:18:40 really can answer your larger question
00:18:43 of how many people would have been capable of doing that.
00:18:47 Well, sort of the reason I think it's interesting
00:18:50 is in the world, as you described,
00:18:55 when there are things that you’d like to do that are good,
00:18:59 that are objectively good,
00:19:02 it’s useful to think about whether
00:19:04 I’m not willing to do something,
00:19:06 or I’m not willing to acknowledge something
00:19:09 as good and the right thing to do
00:19:10 because I’m simply scared of putting my life,
00:19:15 of damaging my life in some kind of way.
00:19:18 And that kind of thought exercise is helpful
00:19:20 to understand what is the right thing
00:19:23 to do within my current skill set and capacity.
00:19:27 Sort of there’s things that are convenient,
00:19:30 and I wonder if there are things
00:19:31 that are highly inconvenient,
00:19:33 where I would have to experience derision,
00:19:35 or hatred, or death, or all those kinds of things,
00:19:39 but it’s truly the right thing to do.
00:19:41 And that kind of balance is,
00:19:43 I feel like in America, we don’t have,
00:19:46 it’s difficult to think in the current times,
00:19:50 it seems easier to put yourself back in history,
00:19:53 where you can sort of objectively contemplate
00:19:56 whether, how willing you are to do the right thing
00:19:59 when the cost is high.
00:20:03 True, but I think we do face those challenges today,
00:20:06 and I think we can still ask ourselves those questions.
00:20:09 So one stand that I took more than 40 years ago now
00:20:13 was to stop eating meat, become a vegetarian at a time
00:20:17 when you hardly met anybody who was a vegetarian,
00:20:21 or if you did, they might’ve been a Hindu,
00:20:23 or they might’ve had some weird theories
00:20:27 about meat and health.
00:20:30 And I know thinking about making that decision,
00:20:33 I was convinced that it was the right thing to do,
00:20:35 but I still did have to think,
00:20:37 are all my friends gonna think that I’m a crank
00:20:40 because I’m now refusing to eat meat?
00:20:43 So I’m not saying there were any terrible sanctions,
00:20:47 obviously, but I thought about that,
00:20:50 and I guess I decided,
00:20:51 well, I still think this is the right thing to do,
00:20:54 and I’ll put up with that if it happens.
00:20:56 And one or two friends were clearly uncomfortable
00:20:59 with that decision, but that was pretty minor
00:21:03 compared to the historical examples
00:21:05 that we’ve been talking about.
00:21:08 But other issues that we have around too,
00:21:09 like global poverty and what we ought to be doing about that
00:21:13 is another question where people, I think,
00:21:16 can have the opportunity to take a stand
00:21:19 on what’s the right thing to do now.
00:21:21 Climate change would be a third question
00:21:23 where, again, people are taking a stand.
00:21:25 I can look at Greta Thunberg there and say,
00:21:29 well, I think it must’ve taken a lot of courage
00:21:32 for a schoolgirl to say,
00:21:35 I’m gonna go on strike about climate change
00:21:37 and see what happens.
00:21:41 Yeah, especially in this divisive world,
00:21:42 she gets exceptionally huge amounts of support
00:21:45 and hatred, both.
00:21:47 That’s right.
00:21:48 Which is very difficult for a teenager to operate in.
00:21:53 In your book, Ethics in the Real World,
00:21:56 amazing book, people should check it out.
00:21:57 Very easy read.
00:21:59 82 brief essays on things that matter.
00:22:02 One of the essays asks, should robots have rights?
00:22:06 You’ve written about this,
00:22:07 so let me ask, should robots have rights?
00:22:11 If we ever develop robots capable of consciousness,
00:22:17 capable of having their own internal perspective
00:22:20 on what’s happening to them
00:22:22 so that their lives can go well or badly for them,
00:22:25 then robots should have rights.
00:22:27 Until that happens, they shouldn’t.
00:22:31 So is consciousness essentially a prerequisite to suffering?
00:22:36 So everything that possesses consciousness
00:22:41 is capable of suffering, put another way.
00:22:43 And if so, what is consciousness?
00:22:48 I certainly think that consciousness
00:22:51 is a prerequisite for suffering.
00:22:53 You can’t suffer if you’re not conscious.
00:22:58 But is it true that every being that is conscious
00:23:02 will suffer or has to be capable of suffering?
00:23:05 I suppose you could imagine a kind of consciousness,
00:23:08 especially if we can construct it artificially,
00:23:10 that’s capable of experiencing pleasure
00:23:13 but just automatically cuts out the consciousness
00:23:16 when they’re suffering.
00:23:18 So they’re like an instant anesthesia
00:23:20 as soon as something is gonna cause you suffering.
00:23:22 So that’s possible.
00:23:25 But doesn’t exist as far as we know on this planet yet.
00:23:31 You asked what is consciousness.
00:23:34 Philosophers often talk about it
00:23:36 as there being a subject of experiences.
00:23:39 So you and I and everybody listening to this
00:23:42 is a subject of experience.
00:23:44 There is a conscious subject who is taking things in,
00:23:48 responding to it in various ways,
00:23:51 feeling good about it, feeling bad about it.
00:23:54 And that’s different from the kinds
00:23:57 of artificial intelligence we have now.
00:24:00 I take out my phone.
00:24:03 I ask Google directions to where I’m going.
00:24:06 Google gives me the directions
00:24:08 and I choose to take a different way.
00:24:10 Google doesn’t care.
00:24:11 It’s not like I’m offending Google or anything like that.
00:24:14 There is no subject of experiences there.
00:24:16 And I think that’s the indication
00:24:19 that Google AI we have now is not conscious
00:24:24 or at least that level of AI is not conscious.
00:24:27 And that’s the way to think about it.
00:24:28 Now, it may be difficult to tell, of course,
00:24:31 whether a certain AI is or isn’t conscious.
00:24:34 It may mimic consciousness
00:24:35 and we can’t tell if it’s only mimicking it
00:24:37 or if it’s the real thing.
00:24:39 But that’s what we’re looking for.
00:24:40 Is there a subject of experience,
00:24:43 a perspective on the world from which things can go well
00:24:47 or badly from that perspective?
00:24:50 So our idea of what suffering looks like
00:24:54 comes from just watching ourselves when we’re in pain.
00:25:01 Or when we’re experiencing pleasure, it’s not only.
00:25:03 Pleasure and pain.
00:25:04 Yes, so and then you could actually,
00:25:07 you could push back on us, but I would say
00:25:09 that’s how we kind of build an intuition about animals
00:25:14 is we can infer the similarities between humans and animals
00:25:18 and so infer that they’re suffering or not
00:25:21 based on certain things and they’re conscious or not.
00:25:24 So what if robots, you mentioned Google Maps
00:25:31 and I’ve done this experiment.
00:25:32 So I work in robotics, and just for myself
00:25:35 I have several Roomba robots
00:25:37 and I play with different speech interaction,
00:25:40 voice based interaction.
00:25:42 And if the Roomba or the robot or Google Maps
00:25:47 shows any signs of pain, like screaming or moaning
00:25:50 or being displeased by something you’ve done,
00:25:54 that in my mind, I can’t help but immediately upgrade it.
00:25:59 And even when I myself programmed it in,
00:26:02 just having another entity that’s now for the moment
00:26:06 disjoint from me showing signs of pain
00:26:09 makes me feel like it is conscious.
00:26:11 Like I immediately, then, whatever,
00:26:15 I immediately realize that it's not, obviously,
00:26:17 but that feeling is there.
00:26:19 So sort of, I guess, what do you think about a world
00:26:26 where Google Maps and Roombas are pretending to be conscious
00:26:32 and we descendants of apes are not smart enough
00:26:35 to realize they’re not or whatever, or that is conscious,
00:26:39 they appear to be conscious.
00:26:40 And so you then have to give them rights.
00:26:44 The reason I’m asking that is that kind of capability
00:26:47 may be closer than we realize.
00:26:52 Yes, that kind of capability may be closer,
00:26:58 but I don’t think it follows
00:26:59 that we have to give them rights.
00:27:00 I suppose the argument for saying that in those circumstances
00:27:05 we should give them rights is that if we don’t,
00:27:07 we’ll harden ourselves against other beings
00:27:11 who are not robots and who really do suffer.
00:27:15 That’s a possibility that, you know,
00:27:17 if we get used to looking at a being suffering
00:27:20 and saying, yeah, we don’t have to do anything about that,
00:27:23 that being doesn’t have any rights,
00:27:25 maybe we’ll feel the same about animals, for instance.
00:27:29 And interestingly, among philosophers and thinkers
00:27:34 who denied that we have any direct duties to animals,
00:27:39 and this includes people like Thomas Aquinas
00:27:41 and Immanuel Kant, they did say, yes,
00:27:46 but still it’s better not to be cruel to them,
00:27:48 not because of the suffering we’re inflicting
00:27:50 on the animals, but because if we are,
00:27:54 we may develop a cruel disposition
00:27:56 and this will be bad for humans, you know,
00:28:00 because we’re more likely to be cruel to other humans
00:28:02 and that would be wrong.
00:28:03 So.
00:28:06 But you don’t accept that kind of.
00:28:07 I don’t accept that as the basis of the argument
00:28:10 for why we shouldn’t be cruel to animals.
00:28:11 I think the basis of the argument
00:28:12 for why we shouldn’t be cruel to animals
00:28:14 is just that we’re inflicting suffering on them
00:28:16 and the suffering is a bad thing.
00:28:19 But possibly I might accept some sort of parallel
00:28:23 of that argument as a reason why you shouldn’t be cruel
00:28:26 to these robots that mimic the symptoms of pain
00:28:30 if it’s gonna be harder for us to distinguish.
00:28:33 I would venture to say, I’d like to disagree with you
00:28:36 and with most people, I think,
00:28:39 at the risk of sounding crazy,
00:28:42 I would like to say that if that Roomba is dedicated
00:28:47 to faking the consciousness and the suffering,
00:28:50 I think it will be impossible for us.
00:28:55 I would like to apply the same argument
00:28:58 as with animals to robots,
00:29:00 that they deserve rights in that sense.
00:29:02 Now we might outlaw the addition
00:29:05 of those kinds of features into Roombas,
00:29:07 but once you do, I think I’m quite surprised
00:29:13 by the upgrade in consciousness
00:29:16 that the display of suffering creates.
00:29:20 It’s a totally open world,
00:29:22 but I’d like to just sort of the difference
00:29:25 between animals and other humans is that in the robot case,
00:29:29 we’ve added it in ourselves.
00:29:32 Therefore, we can say something about how real it is.
00:29:37 But I would like to say that the display of it
00:29:40 is what makes it real.
00:29:41 And I’m not a philosopher, I’m not making that argument,
00:29:45 but I’d at least like to add that as a possibility.
00:29:49 And I’ve been surprised by it
00:29:50 is all I’m trying to sort of articulate poorly, I suppose.
00:29:55 So there is a philosophical view
00:29:59 that has been held about humans,
00:30:00 which is rather like what you’re talking about,
00:30:02 and that’s behaviorism.
00:30:04 So behaviorism was employed both in psychology,
00:30:07 people like BF Skinner was a famous behaviorist,
00:30:10 but in psychology, it was more a kind of a,
00:30:14 what is it that makes this a science?
00:30:16 Well, you need to have behavior
00:30:17 because that’s what you can observe,
00:30:18 you can’t observe consciousness.
00:30:21 But in philosophy, the view was defended
00:30:23 by people like Gilbert Ryle,
00:30:24 who was a professor of philosophy at Oxford,
00:30:26 wrote a book called The Concept of Mind,
00:30:28 in which, in this kind of phase
00:30:32 of linguistic philosophy, this was in the 40s,
00:30:35 he said, well, the meaning of a term is its use,
00:30:38 and we use terms like so and so is in pain
00:30:42 when we see somebody writhing or screaming
00:30:44 or trying to escape some stimulus,
00:30:47 and that’s the meaning of the term.
00:30:48 So that’s what it is to be in pain,
00:30:50 and you point to the behavior.
00:30:54 And Norman Malcolm, who was another philosopher
00:30:58 in this school, from Cornell, had the view that,
00:31:02 so what is it to dream?
00:31:04 After all, we can’t see other people’s dreams.
00:31:07 Well, when people wake up and say,
00:31:10 I’ve just had a dream of, here I was,
00:31:14 undressed, walking down the main street
00:31:15 or whatever it is you’ve dreamt,
00:31:17 that’s what it is to have a dream.
00:31:19 It’s basically to wake up and recall something.
00:31:22 So you could apply this to what you’re talking about
00:31:25 and say, so what it is to be in pain
00:31:28 is to exhibit these symptoms of pain behavior,
00:31:31 and therefore, these robots are in pain.
00:31:34 That’s what the word means.
00:31:36 But nowadays, not many people think
00:31:38 that Ryle’s kind of philosophical behaviorism
00:31:40 is really very plausible,
00:31:42 so I think they would say the same about your view.
00:31:45 So, yes, I just spoke with Noam Chomsky,
00:31:48 who basically was part of dismantling
00:31:52 the behaviorist movement.
00:31:54 But, and I’m with that 100% for studying human behavior,
00:32:00 but I am one of the few people in the world
00:32:04 who has made Roombas scream in pain.
00:32:09 And I just don’t know what to do
00:32:12 with that empirical evidence,
00:32:14 because it’s hard, sort of philosophically, I agree.
00:32:19 But the only reason I philosophically agree in that case
00:32:23 is because I was the programmer.
00:32:25 But if somebody else was a programmer,
00:32:26 I’m not sure I would be able to interpret that well.
00:32:29 So I think it’s a new world
00:32:34 that I was just curious what your thoughts are.
00:32:37 For now, you feel that the display
00:32:42 of what we can kind of intellectually say
00:32:46 is a fake display of suffering is not suffering.
00:32:50 That’s right, that would be my view.
00:32:53 But that’s consistent, of course,
00:32:54 with the idea that it’s part of our nature
00:32:56 to respond to this display
00:32:58 if it’s reasonably authentically done.
00:33:02 And therefore it’s understandable
00:33:04 that people would feel this,
00:33:06 and maybe, as I said, it’s even a good thing
00:33:09 that they do feel it,
00:33:10 and you wouldn’t want to harden yourself against it
00:33:12 because then you might harden yourself
00:33:14 against beings that really are suffering.
00:33:17 But there’s this line, so you said,
00:33:20 once an artificial general intelligence system,
00:33:22 a human level intelligence system, becomes conscious,
00:33:25 I guess if I could just linger on it,
00:33:28 now I’ve wrote really dumb programs
00:33:30 that just say things that I told them to say,
00:33:33 but how do you know when a system like Alexa,
00:33:38 which is sufficiently complex
00:33:39 that you can’t introspect to how it works,
00:33:42 starts giving you signs of consciousness
00:33:46 through natural language?
00:33:48 That there’s a feeling,
00:33:49 there’s another entity there that’s self aware,
00:33:52 that has a fear of death, a mortality,
00:33:55 that has awareness of itself
00:33:57 that we kind of associate with other living creatures.
00:34:03 I guess I’m sort of trying to do the slippery slope
00:34:05 from the very naive thing where I started
00:34:07 into something where it’s sufficiently a black box
00:34:12 to where it’s starting to feel like it’s conscious.
00:34:16 Where’s that threshold
00:34:17 where you would start getting uncomfortable
00:34:20 with the idea of robot suffering, do you think?
00:34:25 I don’t know enough about the programming
00:34:27 that we’re going to this really to answer this question.
00:34:31 But I presume that somebody who does know more about this
00:34:34 could look at the program
00:34:37 and see whether we can explain the behaviors
00:34:41 in a parsimonious way that doesn’t require us
00:34:45 to suggest that some sort of consciousness has emerged.
00:34:50 Or alternatively, whether you’re in a situation
00:34:52 where you say, I don’t know how this is happening,
00:34:56 the program does generate a kind of artificial
00:35:00 general intelligence which is autonomous,
00:35:04 starts to do things itself and is autonomous
00:35:06 of the basic programming that set it up.
00:35:10 And so it’s quite possible that actually
00:35:13 we have achieved consciousness
00:35:15 in a system of artificial intelligence.
00:35:18 Sort of the approach that I work with,
00:35:20 most of the community is really excited about now
00:35:22 is with learning methods, so machine learning.
00:35:26 And the learning methods unfortunately
00:35:27 are not capable of revealing how they work,
00:35:31 which is why somebody like Noam Chomsky criticizes them.
00:35:34 You create powerful systems that are able
00:35:36 to do certain things without understanding
00:35:38 the theory, the physics, the science of how it works.
00:35:42 And so it’s possible if those are the kinds
00:35:44 of methods that succeed, we won’t be able
00:35:46 to know exactly, sort of try to reduce,
00:35:53 try to find whether this thing is conscious or not,
00:35:56 this thing is intelligent or not.
00:35:58 It’s simply giving, when we talk to it,
00:36:01 it displays wit and humor and cleverness
00:36:05 and emotion and fear, and then we won’t be able
00:36:10 to say where in the billions of nodes,
00:36:13 neurons in this artificial neural network
00:36:16 is the fear coming from.
00:36:20 So in that case, that’s a really interesting place
00:36:22 where we do now start to return to behaviorism and say.
00:36:28 Yeah, that is an interesting issue.
00:36:33 I would say that if we have serious doubts
00:36:36 and think it might be conscious,
00:36:39 then we ought to try to give it the benefit
00:36:41 of the doubt, just as I would say with animals.
00:36:45 I think we can be highly confident
00:36:46 that vertebrates are conscious,
00:36:50 but when we get down to the invertebrates, some,
00:36:53 like the octopus, yes, but with insects,
00:36:56 it’s much harder to be confident of that.
00:37:01 I think we should give them the benefit
00:37:02 of the doubt where we can, which means,
00:37:06 I think it would be wrong to torture an insect,
00:37:09 but it doesn’t necessarily mean it’s wrong
00:37:11 to slap a mosquito that’s about to bite you
00:37:14 and stop you getting to sleep.
00:37:16 So I think you try to achieve some balance
00:37:20 in these circumstances of uncertainty.
00:37:22 If it’s okay with you, if we can go back just briefly.
00:37:26 So 44 years ago, like you mentioned, 40 plus years ago,
00:37:29 you’ve written Animal Liberation,
00:37:31 the classic book that started,
00:37:33 that launched, that was the foundation
00:37:36 of the animal liberation movement.
00:37:40 Can you summarize the key set of ideas
00:37:42 that underpin that book?
00:37:44 Certainly, the key idea that underlies that book
00:37:49 is the concept of speciesism,
00:37:52 though I did not invent that term.
00:37:54 I took it from a man called Richard Ryder,
00:37:56 who was in Oxford when I was,
00:37:58 and I saw a pamphlet that he’d written
00:38:00 about experiments on chimpanzees that used that term.
00:38:05 But I think I contributed
00:38:06 to making it philosophically more precise
00:38:08 and to getting it into a broader audience.
00:38:12 And the idea is that we have a bias or a prejudice
00:38:16 against taking seriously the interests of beings
00:38:20 who are not members of our species.
00:38:23 Just as in the past, Europeans, for example,
00:38:26 had a bias against taking seriously
00:38:28 the interests of Africans, racism.
00:38:31 And men have had a bias against taking seriously
00:38:34 the interests of women, sexism.
00:38:37 So I think something analogous, not completely identical,
00:38:41 but something analogous goes on
00:38:44 and has gone on for a very long time
00:38:46 with the way humans see themselves vis a vis animals.
00:38:50 We see ourselves as more important.
00:38:55 We see animals as existing to serve our needs
00:38:58 in various ways.
00:38:59 And you’re gonna find this very explicit
00:39:00 in earlier philosophers from Aristotle
00:39:04 through to Kant and others.
00:39:07 And either we don’t need to take their interests
00:39:12 into account at all,
00:39:14 or we can discount it because they’re not humans.
00:39:17 They count a little bit,
00:39:18 but they don’t count nearly as much as humans do.
00:39:22 My book argues that that attitude is responsible
00:39:25 for a lot of the things that we do to animals
00:39:29 that are wrong, confining them indoors
00:39:32 in very crowded, cramped conditions in factory farms
00:39:36 to produce meat or eggs or milk more cheaply,
00:39:39 using them in some research that’s by no means essential
00:39:44 for our survival or wellbeing, and a whole lot more,
00:39:48 some of the sports and things that we do to animals.
00:39:52 So I think that’s unjustified
00:39:55 because I think the significance of pain and suffering
00:40:01 does not depend on the species of the being
00:40:03 who is in pain or suffering
00:40:04 any more than it depends on the race or sex of the being
00:40:08 who is in pain or suffering.
00:40:11 And I think we ought to rethink our treatment of animals
00:40:14 along the lines of saying,
00:40:16 if the pain is just as great in an animal,
00:40:19 then it’s just as bad that it happens as if it were a human.
00:40:23 Maybe if I could ask, I apologize,
00:40:27 hopefully it’s not a ridiculous question,
00:40:29 but so as far as we know,
00:40:32 we cannot communicate with animals through natural language,
00:40:36 but we would be able to communicate with robots.
00:40:40 So I’m returning to sort of a small parallel
00:40:43 between perhaps animals and the future of AI.
00:40:46 If we do create an AGI system
00:40:48 or as we approach creating that AGI system,
00:40:53 what kind of questions would you ask her
00:40:56 to try to intuit whether there is consciousness
00:41:06 or more importantly, whether there’s capacity to suffer?
00:41:12 I might ask the AGI what she was feeling
00:41:17 or does she have feelings?
00:41:19 And if she says yes, to describe those feelings,
00:41:22 to describe what they were like,
00:41:24 to see what the phenomenal account of consciousness is like.
00:41:30 That’s one question.
00:41:33 I might also try to find out if the AGI
00:41:37 has a sense of itself.
00:41:41 So for example, the idea would you,
00:41:45 we often ask people,
00:41:46 so suppose you were in a car accident
00:41:48 and your brain were transplanted into someone else’s body,
00:41:51 do you think you would survive
00:41:53 or would it be the person whose body was still surviving,
00:41:56 your body having been destroyed?
00:41:58 And most people say, I think I would,
00:42:00 if my brain was transplanted along with my memories
00:42:02 and so on, I would survive.
00:42:04 So we could ask AGI those kinds of questions.
00:42:07 If they were transferred to a different piece of hardware,
00:42:11 would they survive?
00:42:12 What would survive?
00:42:13 And get at that sort of concept.
00:42:15 Sort of on that line, another perhaps absurd question,
00:42:19 but do you think having a body
00:42:22 is necessary for consciousness?
00:42:24 So do you think digital beings can suffer?
00:42:31 Presumably digital beings need to be
00:42:34 running on some kind of hardware, right?
00:42:36 Yeah, that ultimately boils down to,
00:42:38 but this is exactly what you just said,
00:42:40 is moving the brain from one place to another.
00:42:42 So you could move it to a different kind of hardware.
00:42:44 And I could say, look, your hardware is getting worn out.
00:42:49 We’re going to transfer you to a fresh piece of hardware.
00:42:52 So we’re gonna shut you down for a time,
00:42:55 but don’t worry, you’ll be running very soon
00:42:58 on a nice fresh piece of hardware.
00:43:00 And you could imagine this conscious AGI saying,
00:43:03 that’s fine, I don’t mind having a little rest.
00:43:05 Just make sure you don’t lose me or something like that.
00:43:08 Yeah, I mean, that’s an interesting thought
00:43:10 that even with us humans, the suffering is in the software.
00:43:14 We right now don’t know how to repair the hardware,
00:43:19 but we’re getting better at it and better in the idea.
00:43:23 I mean, some people dream about one day being able
00:43:26 to transfer certain aspects of the software
00:43:30 to another piece of hardware.
00:43:33 What do you think, just on that topic,
00:43:35 there’s been a lot of exciting innovation
00:43:39 in brain computer interfaces.
00:43:42 I don’t know if you’re familiar with the companies
00:43:43 like Neuralink, with Elon Musk,
00:43:45 communicating both ways from a computer,
00:43:48 being able to send signals to activate neurons
00:43:51 and being able to read spikes from neurons.
00:43:54 With the dream of being able to expand,
00:43:58 sort of increase the bandwidth at which your brain
00:44:02 can like look up articles on Wikipedia kind of thing,
00:44:05 sort of expand the knowledge capacity of the brain.
00:44:08 Do you think that notion, is that interesting to you
00:44:13 as the expansion of the human mind?
00:44:15 Yes, that’s very interesting.
00:44:17 I’d love to be able to have that increased bandwidth.
00:44:20 And I want better access to my memory, I have to say too,
00:44:23 as I get older, I talk to my wife about things
00:44:28 that we did 20 years ago or something.
00:44:30 Her memory is often better about particular events.
00:44:32 Where were we?
00:44:33 Who was at that event?
00:44:35 What did he or she wear even?
00:44:36 She may know and I have not the faintest idea about this,
00:44:39 but perhaps it’s somewhere in my memory.
00:44:40 And if I had this extended memory,
00:44:42 I could search that particular year and rerun those things.
00:44:46 I think that would be great.
00:44:49 In some sense, we already have that
00:44:51 by storing so much of our data online,
00:44:53 like pictures of different events.
00:44:54 Yes, well, Gmail is fantastic for that
00:44:56 because people email me as if they know me well
00:44:59 and I haven’t got a clue who they are,
00:45:01 but then I search for their name.
00:45:02 Ah yes, they emailed me in 2007
00:45:05 and I know who they are now.
00:45:07 Yeah, so we’re taking the first steps already.
00:45:11 So on the flip side of AI,
00:45:13 people like Stuart Russell and others
00:45:14 focus on the control problem, value alignment in AI,
00:45:19 which is the problem of making sure we build systems
00:45:21 that align to our own values, our ethics.
00:45:25 Do you think, sort of at a high level,
00:45:28 how do we go about building systems,
00:45:31 and do you think it's possible to build systems that align with our values,
00:45:34 that align with our human ethics or living being ethics?
00:45:39 Presumably, it’s possible to do that.
00:45:43 I know that a lot of people think
00:45:46 that there’s a real danger that we won’t,
00:45:48 that we’ll more or less accidentally lose control of AGI.
00:45:51 Do you have that fear yourself personally?
00:45:56 I’m not quite sure what to think.
00:45:58 I talk to philosophers like Nick Bostrom and Toby Ord
00:46:01 and they think that this is a real problem
00:46:05 we need to worry about.
00:46:07 Then I talk to people who work for Microsoft
00:46:11 or DeepMind or somebody and they say,
00:46:13 no, we’re not really that close to producing AGI,
00:46:18 super intelligence.
00:46:19 So if you look at Nick Bostrom,
00:46:21 sort of the arguments, it’s very hard to defend.
00:46:25 So I’m of course, I am a self engineer AI system,
00:46:28 so I’m more with the DeepMind folks
00:46:29 where it seems that we’re really far away,
00:46:32 but then the counter argument is,
00:46:34 is there any fundamental reason that we’ll never achieve it?
00:46:39 And if not, then eventually there’ll be
00:46:42 a dire existential risk.
00:46:44 So we should be concerned about it.
00:46:46 And do you find that argument at all appealing
00:46:50 in this domain or any domain that eventually
00:46:53 this will be a problem so we should be worried about it?
00:46:56 Yes, I think it’s a problem.
00:46:58 I think that’s a valid point.
00:47:03 Of course, when you say eventually,
00:47:08 that raises the question, how far off is that?
00:47:11 And is there something that we can do about it now?
00:47:13 Because if we’re talking about
00:47:15 this being 100 years in the future
00:47:17 and you consider how rapidly our knowledge
00:47:20 of artificial intelligence has grown
00:47:22 in the last 10 or 20 years,
00:47:24 it seems unlikely that there’s anything much
00:47:26 we could do now that would influence
00:47:29 whether this is going to happen 100 years in the future.
00:47:33 People in 80 years in the future
00:47:35 would be in a much better position to say,
00:47:37 this is what we need to do to prevent this happening
00:47:39 than we are now.
00:47:41 So to some extent I find that reassuring,
00:47:44 but I’m all in favor of some people doing research
00:47:48 into this to see if indeed it is that far off
00:47:51 or if we are in a position to do something about it sooner.
00:47:55 I’m very much of the view that extinction
00:47:58 is a terrible thing and therefore,
00:48:02 even if the risk of extinction is very small,
00:48:05 if we can reduce that risk,
00:48:09 that’s something that we ought to do.
00:48:11 My disagreement with some of these people
00:48:12 who talk about longterm risks, extinction risks,
00:48:16 is only about how much priority that should have
00:48:18 as compared to present questions.
00:48:20 So essentially, if you look at the math of it
00:48:22 from a utilitarian perspective,
00:48:25 if it’s existential risk, so everybody dies,
00:48:28 it feels like an infinity in the math equation,
00:48:33 and that makes the math
00:48:36 of the priorities difficult to do.
00:48:39 That if we don’t know the time scale
00:48:42 and you can legitimately argue
00:48:43 that it’s nonzero probability that it’ll happen tomorrow,
00:48:48 that how do you deal with these kinds of existential risks
00:48:52 like from nuclear war, from nuclear weapons,
00:48:55 from biological weapons, from,
00:48:58 I’m not sure if global warming falls into that category
00:49:01 because global warming is a lot more gradual.
00:49:04 And people say it’s not an existential risk
00:49:06 because there’ll always be possibilities
00:49:08 of some humans existing, farming Antarctica
00:49:11 or northern Siberia or something of that sort, yeah.
00:49:14 But you don’t find the complete existential risks
00:49:18 as a fundamental, like an overriding part
00:49:23 of the equations of ethics, of what we should do.
00:49:26 You know, certainly if you treat it as an infinity,
00:49:29 then it plays havoc with any calculations.
00:49:32 But arguably, we shouldn’t.
00:49:34 I mean, one of the ethical assumptions that goes into this
00:49:37 is that the loss of future lives,
00:49:40 that is of merely possible lives of beings
00:49:43 who may never exist at all,
00:49:44 is in some way comparable to the sufferings or deaths
00:49:51 of people who do exist at some point.
00:49:54 And that’s not clear to me.
00:49:57 I think there’s a case for saying that,
00:49:59 but I also think there’s a case for taking the other view.
00:50:01 So that has some impact on it.
00:50:04 Of course, you might say, ah, yes,
00:50:05 but still, if there’s some uncertainty about this
00:50:08 and the costs of extinction are infinite,
00:50:12 then still, it’s gonna overwhelm everything else.
00:50:16 But I suppose I’m not convinced of that.
00:50:20 I’m not convinced that it’s really infinite here.
00:50:23 And even Nick Bostrom, in his discussion of this,
00:50:27 doesn’t claim that there’ll be
00:50:28 an infinite number of lives lived.
00:50:31 What is it, 10 to the 56th or something?
00:50:33 It’s a vast number that I think he calculates.
00:50:36 This is assuming we can upload consciousness
00:50:38 onto these digital forms,
00:50:43 and therefore, they’ll be much more energy efficient,
00:50:45 but he calculates the amount of energy in the universe
00:50:47 or something like that.
00:50:48 So the numbers are vast but not infinite,
00:50:50 which gives you some prospect maybe
00:50:52 of resisting some of the argument.
00:50:55 The beautiful thing with Nick’s arguments
00:50:57 is he quickly jumps from the individual scale
00:50:59 to the universal scale,
00:51:01 which is just awe inspiring to think of
00:51:04 when you think about the entirety
00:51:06 of the span of time of the universe.
00:51:08 It’s both interesting from a computer science perspective,
00:51:11 AI perspective, and from an ethical perspective,
00:51:13 the idea of utilitarianism.
00:51:16 Could you say what is utilitarianism?
00:51:19 Utilitarianism is the ethical view
00:51:22 that the right thing to do is the act
00:51:25 that has the greatest expected utility,
00:51:28 where what that means is it’s the act
00:51:32 that will produce the best consequences,
00:51:34 discounted by the odds that you won’t be able
00:51:37 to produce those consequences,
00:51:38 that something will go wrong.
00:51:40 But in simple case, let’s assume we have certainty
00:51:43 about what the consequences of our actions will be,
00:51:46 then the right action is the action
00:51:47 that will produce the best consequences.
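As a rough formalization of the expected-utility idea Singer states here (the notation is an editorial gloss, not something spelled out in the conversation): for an action a with possible outcomes o, each occurring with probability p(o | a) and carrying a net wellbeing value u(o),

    EU(a) = \sum_{o} p(o \mid a)\, u(o), \qquad a^{*} = \arg\max_{a} EU(a).

Under certainty there is a single outcome per action with p(o | a) = 1, and the rule reduces to choosing the action whose outcome has the highest u(o), which is the simple case described above.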
00:51:50 Is that always, and by the way,
00:51:52 there’s a bunch of nuanced stuff
00:51:53 that you talked about with Sam Harris on his podcast,
00:51:56 which people should go listen to.
00:51:57 It’s great.
00:51:58 That’s like two hours of moral philosophy discussion.
00:52:02 But is that an easy calculation?
00:52:05 No, it’s a difficult calculation.
00:52:07 And actually, there’s one thing that I need to add,
00:52:10 and that is utilitarians, certainly the classical
00:52:14 utilitarians, think that by best consequences,
00:52:16 we’re talking about happiness
00:52:18 and the absence of pain and suffering.
00:52:21 There are other consequentialists
00:52:22 who are not really utilitarians who say
00:52:27 there are different things that could be good consequences.
00:52:29 Justice, freedom, human dignity,
00:52:32 knowledge, they all count as good consequences too.
00:52:35 And that makes the calculations even more difficult
00:52:38 because then you need to know
00:52:38 how to balance these things off.
00:52:40 If you are just talking about wellbeing,
00:52:44 using that term to express happiness
00:52:46 and the absence of suffering,
00:52:49 I think the calculation becomes more manageable
00:52:54 in a philosophical sense.
00:52:56 It’s still in practice.
00:52:58 We don’t know how to do it.
00:52:59 We don’t know how to measure quantities
00:53:01 of happiness and misery.
00:53:02 We don’t know how to calculate the probabilities
00:53:04 that different actions will produce, this or that.
00:53:08 So at best, we can use it as a rough guide
00:53:13 to different actions and one where we have to focus
00:53:16 on the short term consequences
00:53:20 because we just can’t really predict
00:53:22 all of the longer term ramifications.
00:53:25 So what about the extreme suffering of very small groups?
00:53:33 Utilitarianism is focused on the overall aggregate, right?
00:53:38 Would you say you yourself are a utilitarian?
00:53:41 Yes, I’m a utilitarian.
00:53:45 What do you make of the difficult, ethical,
00:53:50 maybe poetic suffering of very few individuals?
00:53:54 I think it’s possible that that gets overridden
00:53:57 by benefits to very large numbers of individuals.
00:54:00 I think that can be the right answer.
00:54:02 But before we conclude that it is the right answer,
00:54:05 we have to know how severe the suffering is
00:54:08 and how that compares with the benefits.
00:54:12 So I tend to think that extreme suffering is worse than
00:54:19 or is further, if you like, below the neutral level
00:54:23 than extreme happiness or bliss is above it.
00:54:27 So when I think about the worst experiences possible
00:54:30 and the best experiences possible,
00:54:33 I don’t think of them as equidistant from neutral.
00:54:36 So like it’s a scale that goes from minus 100 through zero
00:54:39 as a neutral level to plus 100.
00:54:43 Because I know that I would not exchange an hour
00:54:46 of my most pleasurable experiences
00:54:49 for an hour of my most painful experiences,
00:54:52 even I wouldn’t have an hour
00:54:54 of my most painful experiences even for two hours
00:54:57 or 10 hours of my most pleasurable experiences.
00:55:01 Did I say that correctly?
00:55:02 Yeah, yeah, yeah, yeah.
00:55:03 Maybe 20 hours then, it’s 21, what’s the exchange rate?
00:55:07 So that’s the question, what is the exchange rate?
00:55:08 But I think it can be quite high.
00:55:10 So that’s why you shouldn’t just assume that
00:55:15 it’s okay to make one person suffer extremely
00:55:18 in order to make two people much better off.
00:55:21 It might be a much larger number.
00:55:23 But at some point I do think you should aggregate
00:55:27 and the result will be,
00:55:30 even though it violates our intuitions of justice
00:55:33 and fairness, whatever it might be,
00:55:36 giving priority to those who are worse off,
00:55:39 at some point I still think
00:55:41 that will be the right thing to do.
00:55:43 Yeah, it’s some complicated nonlinear function.
00:55:46 Can I ask a sort of out there question is,
00:55:49 the more and more we put our data out there,
00:55:51 the more we’re able to measure a bunch of factors
00:55:53 of each of our individual human lives.
00:55:55 And I could foresee the ability to estimate wellbeing
00:55:59 or whatever we together collectively agree
00:56:03 is a good objective function
00:56:05 from a utilitarian perspective.
00:56:07 Do you think it’ll be possible
00:56:11 and is a good idea to push that kind of analysis
00:56:15 to make then public decisions perhaps with the help of AI
00:56:19 that here’s a tax rate at which wellbeing will be optimized.
00:56:28 Yeah, that would be great if we really knew that,
00:56:31 if we really could calculate that.
00:56:32 No, but do you think it’s possible
00:56:33 to converge towards an agreement amongst humans,
00:56:36 towards an objective function
00:56:39 or is it just a hopeless pursuit?
00:56:42 I don’t think it’s hopeless.
00:56:43 I think it would be difficult
00:56:44 to converge towards agreement, at least at present,
00:56:47 because some people would say,
00:56:49 I’ve got different views about justice
00:56:52 and I think you ought to give priority
00:56:54 to those who are worse off,
00:56:55 even though I acknowledge that the gains
00:56:58 that the worst off are making are less than the gains
00:57:01 that those who are sort of medium badly off could be making.
00:57:05 So we still have all of these intuitions that we argue about.
00:57:10 So I don’t think we would get agreement,
00:57:11 but the fact that we wouldn’t get agreement
00:57:14 doesn’t show that there isn’t a right answer there.
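The tax rate question above can be read as an optimization problem: pick the policy parameter that maximizes an agreed wellbeing objective. The sketch below is entirely hypothetical; the estimated_wellbeing function is invented for illustration and is not a model of any real economy or of any measure the speakers endorse.

```python
# A toy illustration (entirely hypothetical) of optimizing a policy parameter
# against a wellbeing objective. The wellbeing model is invented purely to
# show the shape of the exercise; the "AI" here is just a brute-force search.

def estimated_wellbeing(tax_rate):
    """Hypothetical aggregate wellbeing as a function of tax_rate in [0, 1]:
    some redistribution helps, too much depresses overall activity."""
    redistribution_benefit = 100 * tax_rate * (1 - tax_rate)
    deadweight_loss = 30 * tax_rate ** 2
    return redistribution_benefit - deadweight_loss

# Grid search over candidate rates in one-percent steps.
best_rate = max((r / 100 for r in range(0, 101)), key=estimated_wellbeing)
print(f"rate that maximizes the toy objective: {best_rate:.2f}")
```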
00:57:17 Who do you think gets to say what is right and wrong?
00:57:21 Do you think there’s place for ethics oversight
00:57:23 from the government?
00:57:26 So I’m thinking in the case of AI,
00:57:29 overseeing what kind of decisions AI can make or not,
00:57:33 but also if you look at animal rights
00:57:36 or rather not rights, or perhaps rights,
00:57:39 but the ideas you’ve explored in Animal Liberation,
00:57:43 who gets to, so you eloquently and beautifully write
00:57:46 in your book that, you know, we shouldn’t do this,
00:57:50 but are there some harder rules that should be imposed,
00:57:53 or is this a collective thing we converge towards as a society
00:57:56 and thereby make better and better ethical decisions?
00:58:02 Politically, I’m still a democrat
00:58:04 despite looking at the flaws in democracy
00:58:07 and the way it doesn’t work always very well.
00:58:10 So I don’t see a better option
00:58:11 than allowing the public to vote for governments
00:58:18 in accordance with their policies.
00:58:20 And I hope that they will vote for policies
00:58:24 that reduce the suffering of animals
00:58:27 and reduce the suffering of distant humans,
00:58:30 whether geographically distant or distant
00:58:32 because they’re future humans.
00:58:35 But I recognise that democracy
00:58:36 isn’t really well set up to do that.
00:58:38 And in a sense, you could imagine a wise and benevolent,
00:58:45 you know, omnibenevolent leader
00:58:48 who would do that better than democracies could.
00:58:51 But in the world in which we live,
00:58:54 it’s difficult to imagine that this leader
00:58:57 isn’t gonna be corrupted by a variety of influences.
00:59:01 You know, we’ve had so many examples
00:59:04 of people who’ve taken power with good intentions
00:59:08 and then have ended up being corrupt
00:59:10 and favouring themselves.
00:59:12 So I don’t know, you know, that’s why, as I say,
00:59:16 I don’t know that we have a better system
00:59:17 than democracy to make these decisions.
00:59:20 Well, so you also discuss effective altruism,
00:59:23 which is a mechanism for going around government
00:59:27 for putting the power in the hands of the people
00:59:29 to donate money towards causes to help, you know,
00:59:32 remove the middleman and give it directly
00:59:37 to the causes that they care about.
00:59:41 Sort of, maybe this is a good time to ask,
00:59:45 you, 10 years ago, wrote The Life You Can Save,
00:59:48 that’s now, I think, available for free online?
00:59:51 That’s right, you can download either the ebook
00:59:53 or the audiobook free from thelifeyoucansave.org.
00:59:58 And what are the key ideas that you present
01:00:01 in the book?
01:00:03 The main thing I wanna do in the book
01:00:05 is to make people realise that it’s not difficult
01:00:10 to help people in extreme poverty,
01:00:13 that there are highly effective organisations now
01:00:16 that are doing this, that they’ve been independently assessed
01:00:20 and verified by research teams that are expert in this area
01:00:25 and that it’s a fulfilling thing to do
01:00:28 to, for at least part of your life, you know,
01:00:30 we can’t all be saints, but at least one of your goals
01:00:33 should be to really make a positive contribution
01:00:36 to the world and to do something to help people
01:00:38 who through no fault of their own
01:00:40 are in very dire circumstances and living a life
01:00:45 that is barely or perhaps not at all
01:00:49 a decent life for a human being to live.
01:00:51 So you describe a minimum ethical standard of giving.
01:00:56 What advice would you give to people
01:01:01 that want to be effectively altruistic in their life,
01:01:06 like live an effective altruism life?
01:01:09 There are many different kinds of ways of living
01:01:12 as an effective altruist.
01:01:14 And if you’re at the point where you’re thinking
01:01:16 about your long term career, I’d recommend you take a look
01:01:20 at a website called 80,000 Hours, 80000hours.org,
01:01:24 which looks at ethical career choices.
01:01:27 And they range from, for example,
01:01:29 going to work on Wall Street
01:01:31 so that you can earn a huge amount of money
01:01:33 and then donate most of it to effective charities
01:01:36 to going to work for a really good nonprofit organization
01:01:40 so that you can directly use your skills and ability
01:01:44 and hard work to further a good cause,
01:01:48 or perhaps going into politics, maybe small chances,
01:01:52 but big payoffs in politics,
01:01:55 go to work in the public service
01:01:56 where if you’re talented, you might rise to a high level
01:01:59 where you can influence decisions,
01:02:01 do research in an area where the payoffs could be great.
01:02:05 There are a lot of different opportunities,
01:02:07 but too few people are even thinking about those questions.
01:02:11 They’re just going along in some sort of preordained rut
01:02:14 to particular careers.
01:02:15 Maybe they think they’ll earn a lot of money
01:02:17 and have a comfortable life,
01:02:19 but they may not find that as fulfilling
01:02:20 as actually knowing that they’re making
01:02:23 a positive difference to the world.
01:02:25 What about in terms of,
01:02:27 so that’s the long term, 80,000 Hours,
01:02:30 sort of the shorter term giving part of it,
01:02:33 well, actually it’s a part of that.
01:02:34 You go to work at Wall Street,
01:02:37 and you would like to give a percentage of your income,
01:02:40 which you talk about in The Life You Can Save.
01:02:42 I mean, I was looking through it, it’s quite compelling,
01:02:48 I mean, I’m just a dumb engineer,
01:02:50 so I like simple rules, and there’s a nice percentage.
01:02:53 Okay, so I do actually set out suggested levels of giving
01:02:57 because people often ask me about this.
01:03:00 A popular answer is give 10%, the traditional tithe
01:03:04 that’s recommended in Christianity and also Judaism.
01:03:08 But why should it be the same percentage
01:03:11 irrespective of your income?
01:03:13 Tax scales reflect the idea that the more income you have,
01:03:16 the more you can pay tax.
01:03:18 And I think the same is true in what you can give.
01:03:20 So I do set out a progressive donor scale,
01:03:25 which starts out at 1% for people on modest incomes
01:03:28 and rises to 33 and a third percent
01:03:31 for people who are really earning a lot.
01:03:34 And my idea is that I don’t think any of these amounts
01:03:38 really impose real hardship on people
01:03:42 because they are progressive and geared to income.
01:03:45 So I think anybody can do this
01:03:48 and can know that they’re doing something significant
01:03:51 to play their part in reducing the huge gap
01:03:56 between people in extreme poverty in the world
01:03:58 and people living affluent lives.
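The progressive donor scale can be expressed as a small calculator, much like a marginal tax schedule. In the sketch below, only the 1% starting rate and the 33 1/3% top rate come from the conversation; the income thresholds and intermediate bands are hypothetical placeholders, and the real scale is published at thelifeyoucansave.org.

```python
# A sketch of a progressive giving calculator in the spirit of the scale
# described above. Only the 1% floor and the 33 1/3% top rate come from the
# conversation; the thresholds and intermediate rates below are invented
# purely for illustration.

HYPOTHETICAL_BANDS = [
    # (income above this threshold, marginal giving rate)
    (0,         0.01),
    (100_000,   0.05),
    (250_000,   0.10),
    (500_000,   0.20),
    (1_000_000, 1 / 3),
]

def suggested_gift(income):
    """Apply the marginal rates band by band, like a progressive tax scale."""
    uppers = [b[0] for b in HYPOTHETICAL_BANDS[1:]] + [float("inf")]
    gift = 0.0
    for (lower, rate), upper in zip(HYPOTHETICAL_BANDS, uppers):
        if income > lower:
            gift += (min(income, upper) - lower) * rate
    return gift

print(suggested_gift(50_000))     # 500.0, i.e. 1% of a modest income
print(suggested_gift(2_000_000))  # higher incomes give a much larger share
```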
01:04:02 And aside from it being an ethical life,
01:04:05 it’s one that you find more fulfilling
01:04:07 because there’s something about our human nature,
01:04:11 or some of our human natures,
01:04:13 maybe most of our human nature, that enjoys doing
01:04:18 the ethical thing.
01:04:21 Yes, I make both those arguments,
01:04:23 that it is an ethical requirement
01:04:25 in the kind of world we live in today
01:04:27 to help people in great need when we can easily do so,
01:04:30 but also that it is a rewarding thing
01:04:33 and there’s good psychological research showing
01:04:35 that people who give more tend to be more satisfied
01:04:39 with their lives.
01:04:40 And I think this has something to do
01:04:41 with having a purpose that’s larger than yourself
01:04:44 and therefore never being, if you like,
01:04:49 never being bored sitting around,
01:04:51 oh, you know, what will I do next?
01:04:52 I’ve got nothing to do.
01:04:54 In a world like this, there are many good things
01:04:56 that you can do and enjoy doing them.
01:04:59 Plus you’re working with other people
01:05:02 in the effective altruism movement
01:05:03 who are forming a community of other people
01:05:06 with similar ideas and they tend to be interesting,
01:05:09 thoughtful and good people as well.
01:05:11 And having friends of that sort is another big contribution
01:05:14 to having a good life.
01:05:16 So we talked about big things that are beyond ourselves,
01:05:20 but we’re also just human and mortal.
01:05:24 Do you ponder your own mortality?
01:05:27 Is there insights about your philosophy,
01:05:29 the ethics that you gain from pondering your own mortality?
01:05:35 Clearly, you know, as you get into your 70s,
01:05:37 you can’t help thinking about your own mortality.
01:05:40 Uh, but I don’t know that I have great insights
01:05:44 into that from my philosophy.
01:05:47 I don’t think there’s anything after the death of my body,
01:05:50 you know, assuming that we won’t be able to upload my mind
01:05:53 into anything at the time when I die.
01:05:56 So I don’t think there’s any afterlife
01:05:58 or anything to look forward to in that sense.
01:06:00 Do you fear death?
01:06:01 So if you look at Ernest Becker
01:06:04 and describing the motivating aspects
01:06:08 of our ability to be cognizant of our mortality,
01:06:14 do you have any of those elements
01:06:17 in your drive and your motivation in life?
01:06:21 I suppose the fact that you have only a limited time
01:06:23 to achieve the things that you want to achieve
01:06:25 gives you some sort of motivation
01:06:27 to get going and achieving them.
01:06:29 And if we thought we were immortal,
01:06:31 we might say, ah, you know,
01:06:32 I can put that off for another decade or two.
01:06:36 So there’s that about it.
01:06:37 But otherwise, you know, no,
01:06:40 I’d rather have more time to do more.
01:06:42 I’d also like to be able to see how things go
01:06:45 that I’m interested in, you know.
01:06:47 Is climate change gonna turn out to be as dire
01:06:49 as a lot of scientists say that it is going to be?
01:06:53 Will we somehow scrape through
01:06:55 with less damage than we thought?
01:06:57 I’d really like to know the answers to those questions,
01:06:59 but I guess I’m not going to.
01:07:02 Well, you said there’s nothing afterwards.
01:07:05 So let me ask the even more absurd question.
01:07:08 What do you think is the meaning of it all?
01:07:11 I think the meaning of life is the meaning we give to it.
01:07:14 I don’t think that we were brought into the universe
01:07:18 for any kind of larger purpose.
01:07:21 But given that we exist,
01:07:24 I think we can recognize that some things
01:07:26 are objectively bad.
01:07:30 Extreme suffering is an example,
01:07:32 and other things are objectively good,
01:07:35 like having a rich, fulfilling, enjoyable,
01:07:38 pleasurable life, and we can try to do our part
01:07:42 in reducing the bad things and increasing the good things.
01:07:47 So one way, the meaning is to do a little bit more
01:07:50 of the good things, objectively good things,
01:07:52 and a little bit less of the bad things.
01:07:55 Yes, so do as much of the good things as you can
01:07:58 and as little of the bad things.
01:08:00 Beautifully put, I don’t think there’s a better place
01:08:03 to end it, thank you so much for talking today.
01:08:04 Thanks very much, Lex.
01:08:05 It’s been really interesting talking to you.
01:08:08 Thanks for listening to this conversation
01:08:10 with Peter Singer, and thank you to our sponsors,
01:08:13 Cash App and Masterclass.
01:08:15 Please consider supporting the podcast
01:08:17 by downloading Cash App and using the code LexPodcast,
01:08:21 and signing up at masterclass.com slash Lex.
01:08:26 Click the links, buy all the stuff.
01:08:28 It’s the best way to support this podcast
01:08:30 and the journey I’m on in my research and startup.
01:08:35 If you enjoy this thing, subscribe on YouTube,
01:08:38 review it with five stars on Apple Podcast, support on Patreon,
01:08:41 or connect with me on Twitter at Lex Friedman,
01:08:43 spelled without the E, just F R I D M A N.
01:08:48 And now, let me leave you with some words
01:08:50 from Peter Singer, what one generation finds ridiculous,
01:08:54 the next accepts, and the third shudders
01:08:59 when it looks back at what the first did.
01:09:01 Thank you for listening, and hope to see you next time.