Transcript
00:00:00 The following is a conversation with Michael Stevens,
00:00:02 the creator of Vsauce,
00:00:04 one of the most popular educational YouTube channels
00:00:07 in the world with over 15 million subscribers
00:00:10 and over 1.7 billion views.
00:00:13 His videos often ask and answer questions
00:00:16 that are both profound and entertaining,
00:00:18 spanning topics from physics to psychology.
00:00:21 Popular questions include,
00:00:23 what if everyone jumped at once?
00:00:25 Or what if the sun disappeared?
00:00:27 Or why are things creepy?
00:00:29 Or what if the earth stopped spinning?
00:00:32 As part of his channel,
00:00:34 he created three seasons of Mind Field,
00:00:36 a series that explored human behavior.
00:00:38 His curiosity and passion are contagious
00:00:41 and inspiring to millions of people.
00:00:44 And so as an educator,
00:00:45 his impact and contribution to the world
00:00:47 is truly immeasurable.
00:00:49 This is the Artificial Intelligence Podcast.
00:00:52 If you enjoy it, subscribe on YouTube,
00:00:55 give it five stars on Apple Podcast,
00:00:57 support it on Patreon,
00:00:58 or simply connect with me on Twitter,
00:01:00 at Lex Fridman, spelled F R I D M A N.
00:01:04 I recently started doing ads
00:01:06 at the end of the introduction.
00:01:08 I’ll do one or two minutes after introducing the episode
00:01:10 and never any ads in the middle
00:01:12 that break the flow of the conversation.
00:01:14 I hope that works for you
00:01:16 and doesn’t hurt the listening experience.
00:01:19 This show is presented by Cash App,
00:01:21 the number one finance app in the App Store.
00:01:24 I personally use Cash App to send money to friends,
00:01:26 but you can also use it to buy, sell,
00:01:28 and deposit Bitcoin in just seconds.
00:01:30 Cash App also has a new investing feature.
00:01:33 You can buy fractions of a stock, say $1 worth,
00:01:36 no matter what the stock price is.
00:01:38 Brokerage services are provided by Cash App Investing,
00:01:40 a subsidiary of Square and member SIPC.
00:01:44 I’m excited to be working with Cash App
00:01:46 to support one of my favorite organizations called First,
00:01:49 best known for their FIRST Robotics and LEGO competitions.
00:01:52 They educate and inspire hundreds of thousands of students
00:01:56 in over 110 countries
00:01:57 and have a perfect rating on Charity Navigator,
00:02:00 which means the donated money
00:02:01 is used to maximum effectiveness.
00:02:04 When you get Cash App from the App Store or Google Play
00:02:07 and use code LEXPODCAST, you’ll get $10,
00:02:11 and Cash App will also donate $10 to First,
00:02:13 which again is an organization
00:02:15 that I’ve personally seen inspire girls and boys
00:02:18 to dream of engineering a better world.
00:02:21 And now here’s my conversation with Michael Stevens.
00:02:25 One of your deeper interests is psychology,
00:02:30 understanding human behavior.
00:02:32 You’ve pointed out how messy studying human behavior is
00:02:35 and that it’s far from the scientific rigor
00:02:37 of something like physics, for example.
00:02:40 How do you think we can take psychology
00:02:43 from where it’s been in the 20th century
00:02:45 to something more like what the physicists,
00:02:49 theoretical physicists are doing,
00:02:50 something precise, something rigorous?
00:02:52 Well, we could do it by finding
00:02:57 the physical foundations of psychology, right?
00:03:01 If all of our emotions and moods and feelings and behaviors
00:03:05 are the result of mechanical behaviors of atoms
00:03:11 and molecules in our brains,
00:03:12 then can we find correlations?
00:03:15 Perhaps chaos makes that really difficult,
00:03:17 and the uncertainty principle and all these things,
00:03:19 that we can’t know the position and velocity
00:03:22 of every single quantum state in a brain, probably.
00:03:27 But I think that if we can get to that point with psychology,
00:03:33 then we can start to think about consciousness
00:03:37 in a physical and mathematical way.
00:03:40 When we ask questions like, well, what is self-reference?
00:03:44 How can you think about yourself thinking?
00:03:47 What are some mathematical structures
00:03:49 that could bring that about?
00:03:52 In terms of consciousness
00:03:55 and breaking it down into physics,
00:03:59 there’s ideas of panpsychism where people believe
00:04:02 that whatever consciousness is,
00:04:04 is a fundamental part of reality.
00:04:07 It’s almost like a physics law.
00:04:08 What are your views on consciousness?
00:04:11 Do you think it’s a deep part of reality
00:04:15 or is it something that’s deeply human
00:04:17 and constructed by us humans?
00:04:21 Starting nice and light and easy.
00:04:25 Nothing I ask you today has an actually proven answer.
00:04:28 So we’re just hypothesizing.
00:04:29 So yeah, I mean, I should clarify, this is all speculation
00:04:32 and I’m not an expert in any of these topics
00:04:35 and I’m not God, but I think that consciousness
00:04:39 is probably something that can be fully explained
00:04:44 within the laws of physics.
00:04:48 I think that our bodies and brains and the universe
00:04:51 at the quantum level are so rich and complex,
00:04:56 I’d be surprised if we couldn’t find room
00:04:58 for consciousness there.
00:05:00 And why should we be conscious?
00:05:04 Why are we aware of ourselves?
00:05:06 That is a very strange and interesting
00:05:10 and important question.
00:05:11 And I think for the next few thousand years,
00:05:15 we’re going to have to believe in answers purely on faith.
00:05:20 But my guess is that we will find that,
00:05:25 within the configuration space
00:05:27 of possible arrangements of the universe,
00:05:29 there are some that contain memories of others.
00:05:34 Literally, Julian Barbour calls them time capsule states
00:05:38 where you’re like, yeah, not only do I have a scratch
00:05:40 on my arm, but also this state of the universe
00:05:43 also contains a memory in my head
00:05:45 of being scratched by my cat three days ago.
00:05:48 And for some reason, those kinds of states of the universe
00:05:52 are more plentiful or more likely.
00:05:55 When you say those states,
00:05:57 do you mean the ones that contain memories of its past,
00:06:00 or the ones that contain memories of its past
00:06:02 and have degrees of consciousness?
00:06:05 Just the first part, because I think the consciousness
00:06:08 then emerges from the fact that a state of the universe
00:06:13 that contains fragments or memories of other states
00:06:19 is one where you’re going to feel like there’s time.
00:06:22 You’re going to feel like, yeah,
00:06:24 things happened in the past.
00:06:26 And I don’t know what’ll happen in the future
00:06:27 because these states don’t contain information
00:06:29 about the future.
00:06:30 For some reason, those kinds of states
00:06:34 are either more common, more plentiful,
00:06:38 or you could use the anthropic principle and just say,
00:06:40 well, they’re extremely rare,
00:06:42 but until you are in one, or if you are in one,
00:06:45 then you can ask questions,
00:06:46 like you’re asking me on this podcast.
00:06:49 Why questions?
00:06:50 Yeah, it’s like, why are we conscious?
00:06:52 Well, because if we weren’t,
00:06:53 we wouldn’t be asking why we were.
00:06:56 You’ve kind of implied that you have a sense,
00:06:59 again, hypothesis, theorizing
00:07:02 that the universe is deterministic.
00:07:05 What’s your thoughts about free will?
00:07:08 Do you think of the universe as deterministic
00:07:10 in the sense that it’s unrolling in a particular way,
00:07:14 like it’s operating under a specific set of physical laws,
00:07:17 and if you set the initial conditions,
00:07:21 it will unroll in the exact same way
00:07:23 in our particular line of the universe every time.
00:07:28 That is a very useful way to think about the universe.
00:07:31 It’s done us well.
00:07:32 It’s brought us to the moon.
00:07:33 It’s brought us to where we are today, right?
00:07:35 I would not say that I believe in determinism
00:07:40 in that kind of an absolute form,
00:07:43 or actually I just don’t care.
00:07:45 Maybe it’s true,
00:07:46 but I’m not gonna live my life like it is.
00:07:49 What, in your sense,
00:07:50 cause you’ve studied kind of how we humans
00:07:54 think of the world,
00:07:55 what in your view is the difference between our perception,
00:07:59 like how we think the world is, and reality?
00:08:02 Do you think there’s a huge gap there?
00:08:04 Like, do we delude ourselves? Is the whole thing an illusion?
00:08:07 Just everything about human psychology,
00:08:09 the way we see things and how things actually are.
00:08:12 All the things you’ve studied, what’s your sense?
00:08:14 How big is the gap between reality and perception?
00:08:16 Well, again, purely speculative.
00:08:18 I think that we will never know the answer.
00:08:20 We cannot know the answer.
00:08:22 There is no experiment to find an answer to that question.
00:08:26 Everything we experience is an event in our brain.
00:08:30 When I look at a cat,
00:08:32 I can’t even prove that there’s a cat there.
00:08:36 All I am experiencing is the perception of a cat
00:08:40 inside my own brain.
00:08:43 I am only a witness to the events of my mind.
00:08:46 I think it is very useful to infer that
00:08:50 if I witness the event of cat in my head,
00:08:54 it’s because I’m looking at a cat that is literally there
00:08:57 and it has its own feelings and motivations
00:08:59 and should be pet and given food and water and love.
00:09:03 I think that’s the way you should live your life.
00:09:05 But whether or not we live in a simulation,
00:09:09 I’m a brain in a vat, I don’t know.
00:09:13 Do you care?
00:09:14 I don’t really.
00:09:16 Well, I care because it’s a fascinating question.
00:09:19 And it’s a fantastic way to get people excited about
00:09:23 all kinds of topics, physics, psychology,
00:09:26 consciousness, philosophy.
00:09:28 But at the end of the day, what would the difference be?
00:09:31 If you…
00:09:31 The cat needs to be fed at the end of the day,
00:09:33 otherwise it’ll be a dead cat.
00:09:35 Right, but if it’s not even a real cat,
00:09:38 then it’s just like a video game cat.
00:09:40 And right, so what’s the difference between killing
00:09:43 a digital cat in a video game because of neglect
00:09:46 versus a real cat?
00:09:48 It seems very different to us psychologically.
00:09:50 Like I don’t really feel bad about, oh my gosh,
00:09:52 I forgot to feed my Tamagotchi, right?
00:09:54 But I would feel terrible
00:09:55 if I forgot to feed my actual cats.
00:09:58 So can you just touch on the topic of simulation?
00:10:03 Do you find this thought experiment that we’re living
00:10:06 in a simulation useful, inspiring or constructive
00:10:10 in any kind of way?
00:10:11 Do you think it’s ridiculous?
00:10:12 Do you think it could be true?
00:10:14 Or is it just a useful thought experiment?
00:10:17 I think it is extremely useful as a thought experiment
00:10:20 because it makes sense to everyone,
00:10:24 especially as we see virtual reality
00:10:27 and computer games getting more and more complex.
00:10:30 You’re not talking to an audience in like Newton’s time
00:10:33 where you’re like, imagine a clock
00:10:36 that it has mechanics in it that are so complex
00:10:38 that it can create love.
00:10:40 And everyone’s like, no.
00:10:42 But today you really start to feel, man,
00:10:46 at what point is this little robot friend of mine
00:10:48 gonna be like someone I don’t want to cancel plans with?
00:10:53 And so the thought experiment
00:10:59 of do we live in a simulation?
00:11:00 Am I a brain in a vat that is just being given
00:11:03 electrical impulses from some nefarious other beings
00:11:08 so that I believe that I live on earth
00:11:11 and that I have a body and all of this?
00:11:13 And the fact that you can’t prove it either way
00:11:15 is a fantastic way to introduce people
00:11:17 to some of the deepest questions.
00:11:20 So you mentioned a little buddy
00:11:23 that you would want to cancel an appointment with.
00:11:25 So that’s a lot of our conversations.
00:09:27 That’s what my research is: artificial intelligence.
00:11:32 And I apologize, but you’re such a fun person
00:11:34 to ask these big questions with.
00:11:36 Well, I hope I can give some answers that are interesting.
00:11:40 Well, because you’ve sharpened your brain’s ability
00:11:45 to explore some of the questions
00:11:47 that many scientists are actually afraid of even touching,
00:11:51 which is fascinating.
00:11:52 I think you’re in that sense ultimately a great scientist
00:11:56 through this process of sharpening your brain.
00:11:58 Well, I don’t know if I am a scientist.
00:12:01 I think science is a way of knowing
00:12:04 and there are a lot of questions I investigate
00:12:09 that are not scientific questions.
00:12:11 On, like, Mind Field, we have definitely done
00:12:14 scientific experiments and studies that had hypotheses
00:12:17 and all of that, but not to be too precious
00:12:22 about what the word science means.
00:12:24 But I think I would just describe myself as curious
00:12:27 and I hope that that curiosity is contagious.
00:12:29 So to you, the scientific method
00:12:31 is deeply connected to science
00:12:33 because your curiosity took you to asking questions.
00:12:38 To me, asking a good question, even if you feel,
00:12:43 or society feels, that it’s not a question
00:12:45 within the reach of science currently.
00:12:47 To me, asking the question is the biggest step
00:12:51 of the scientific process.
00:12:53 The scientific method is the second part
00:12:57 and that may be what traditionally is called science,
00:12:59 but to me, asking the questions,
00:13:00 being brave enough to ask the questions,
00:13:03 being curious and not constrained
00:13:05 by what you’re supposed to think is true,
00:13:09 that’s what it means to be a scientist to me.
00:13:11 It’s certainly a huge part of what it means to be a human.
00:13:16 If I were to say, you know what?
00:13:17 I don’t believe in forces.
00:13:19 I think that when I push on a massive object,
00:13:22 a ghost leaves my body and enters the object I’m pushing
00:13:25 and these ghosts happen to just get really lazy
00:13:28 when they’re around massive things
00:13:29 and that’s why F equals MA.
00:13:32 Oh, and by the way, the laziness of the ghost
00:13:34 is in proportion to the mass of the object.
00:13:36 So boom, prove me wrong.
00:13:37 Every experiment, well, you can never find the ghost.
00:13:41 And so none of that theory is scientific,
00:13:45 but once I start saying, can I see the ghost?
00:13:49 Why should there be a ghost?
00:13:50 And if there aren’t ghosts, what might I expect?
00:13:53 And I start to do different tests to see,
00:13:56 is this falsifiable?
00:13:59 Are there things that should happen if there are ghosts
00:14:01 or are there things that shouldn’t happen?
00:14:02 And do they, you know, what do I observe?
00:14:05 Now I’m thinking scientifically.
00:14:06 I don’t think of science as, wow, a picture of a black hole.
00:14:10 That’s just a photograph.
00:14:12 That’s an image.
00:14:13 That’s data.
00:14:13 That’s a sensory and perception experience.
00:14:16 Science is how we got that and how we understand it
00:14:19 and how we believe in it
00:14:20 and how we reduce our uncertainty around what it means.
00:14:24 But I would say I’m deeply within the scientific community
00:14:28 and I’m sometimes disheartened by the elitism
00:14:31 of the thinking, sort of not allowing yourself
00:14:34 to think outside the box.
00:14:36 So allowing the possibility
00:14:37 of going against the conventions of science,
00:14:40 I think is a beautiful part of some
00:14:43 of the greatest scientists in history.
00:14:46 I don’t know, I’m impressed by scientists every day
00:14:49 and revolutions in our knowledge of the world occur
00:14:58 only under very special circumstances.
00:15:00 It is very scary to challenge conventional thinking
00:15:05 and risky because let’s go back to elitism and ego, right?
00:15:10 If you just say, you know what?
00:15:11 I believe in the spirits of my body
00:15:14 and all forces are actually created by invisible creatures
00:15:17 that transfer themselves between objects.
00:15:22 If you ridicule every other theory
00:15:26 and say that you’re correct,
00:15:28 then ego gets involved and you just don’t go anywhere.
00:15:31 But fundamentally the question of well, what is a force
00:15:36 is incredibly important.
00:15:38 We need to have that conversation,
00:15:40 but it needs to be done in this very political way
00:15:42 of like, let’s be respectful of everyone
00:15:44 and let’s realize that we’re all learning together
00:15:46 and not shutting out other people.
00:15:49 And so when you look at a lot of revolutionary ideas,
00:15:54 they were not accepted right away.
00:15:57 And, you know, Galileo had a couple of problems
00:16:00 with the authorities and later thinkers, Descartes,
00:16:04 was like, all right, look, I kind of agree with Galileo,
00:16:06 but I’m gonna have to not say that.
00:16:11 I’ll have to create and invent and write different things
00:16:13 that keep me from being in trouble,
00:16:15 but we still slowly made progress.
00:16:17 Revolutions are difficult in all forms
00:16:19 and certainly in science.
00:16:20 Before we get to AI, on the topic of revolutionary ideas,
00:16:23 let me ask: on a Reddit AMA, you said that is the earth flat
00:16:28 is one of your favorite questions you’ve ever answered,
00:16:31 speaking of revolutionary ideas.
00:16:33 So your video on that, people should definitely watch,
00:16:37 is really fascinating.
00:16:39 Can you elaborate why you enjoyed
00:16:41 answering this question so much?
00:16:43 Yeah, well, it’s a long story.
00:16:45 I remember a long time ago,
00:16:49 I was living in New York at the time,
00:16:50 so it had to have been like 2009 or something.
00:16:54 I visited the Flat Earth forums
00:16:57 and this was before the Flat Earth theories
00:17:00 became as sort of mainstream as they are.
00:17:03 Sorry to ask the dumb question, forums, online forums.
00:17:06 Yeah, the Flat Earth Society,
00:17:09 I don’t know if it’s .com or .org, but I went there
00:17:11 and I was reading their ideas
00:17:14 and how they responded to typical criticisms of,
00:17:17 well, the earth isn’t flat because what about this?
00:17:20 And I could not tell, and I mentioned this in my video,
00:17:23 I couldn’t tell how many of these community members
00:17:28 actually believed the earth was flat or were just trolling.
00:17:32 And I realized that the fascinating thing is,
00:17:36 how do we know anything?
00:17:38 And what makes for a good belief
00:17:41 versus a maybe not so tenable or good belief?
00:17:45 And so that’s really what my video
00:17:47 about earth being flat is about.
00:17:49 It’s about, look, there are a lot of reasons
00:17:52 that the earth is probably not flat,
00:17:57 but a Flat Earth believer can respond
00:18:00 to every single one of them, but it’s all in an ad hoc way.
00:18:04 And all of their rebuttals
00:18:05 aren’t necessarily gonna form
00:18:07 a cohesive, non-contradictory whole.
00:18:10 And I believe that’s the episode
00:18:12 where I talk about Occam’s razor
00:18:14 and Newton’s flaming laser sword.
00:18:17 And then I say, well, you know what, wait a second.
00:18:19 We know that space contracts as you move.
00:18:25 And so to a particle moving near the speed of light
00:18:27 towards earth, earth would be flattened
00:18:29 in the direction of that particle’s travel.
00:18:32 So to them, earth is flat.
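The length contraction Michael invokes here is easy to quantify. A minimal sketch with illustrative values of my own, not anything from the conversation:

```python
import math

# How thick does Earth look to a particle approaching it near the
# speed of light? Length contraction divides the rest-frame diameter
# by the Lorentz factor gamma.
EARTH_DIAMETER_M = 12_742_000.0  # mean diameter of Earth, meters

def contracted_thickness(speed_fraction: float) -> float:
    """Earth's thickness along the direction of travel, as measured
    in the frame of a particle moving at speed_fraction * c."""
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction ** 2)
    return EARTH_DIAMETER_M / gamma

for v in (0.9, 0.99, 0.999999):
    print(f"v = {v}c -> Earth looks {contracted_thickness(v):,.0f} m thick")
```

Even at 0.9c the thickness along the travel direction shrinks by more than half; for the highest-energy cosmic rays, with enormous gamma factors, Earth really is effectively a pancake in their frame.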
00:18:35 Like we need to be really generous to even wild ideas
00:18:41 because they’re all thinking,
00:18:43 they’re all the communication of ideas.
00:18:45 And what else can it mean to be a human?
00:18:48 Yeah, and I think I’m a huge fan
00:18:50 of the Flat Earth theory, quote unquote,
00:18:54 in the sense that to me it feels harmless
00:18:57 to explore some of the questions
00:18:59 of what it means to believe something,
00:19:00 what it means to explore the edge of science and so on.
00:19:05 Cause to me,
00:19:07 nobody gets hurt whether the earth is flat or round,
00:19:09 not literally, but I mean intellectually
00:19:11 when we’re just having a conversation.
00:19:13 That said, again, to elitism,
00:19:15 I find that scientists roll their eyes
00:19:18 way too fast on the Flat Earth.
00:19:21 The kind of dismissal that I see of even this notion,
00:19:25 they haven’t sat down and said,
00:19:27 what are the arguments that are being proposed,
00:19:30 and this is why these arguments are incorrect.
00:19:32 So that should be something
00:19:35 that scientists should always do,
00:19:37 even for ideas that seem the most ridiculous.
00:19:42 So I like this as almost, it’s almost my test
00:19:45 when I ask people what they think about Flat Earth theory,
00:19:48 to see how quickly they roll their eyes.
00:19:51 Well, yeah, I mean, let me go on record
00:19:53 and say that the earth is not flat.
00:19:58 It is a three dimensional spheroid.
00:20:02 However, I don’t know that and it has not been proven.
00:20:07 Science doesn’t prove anything.
00:20:08 It just reduces uncertainty.
00:20:10 Could the earth actually be flat?
00:20:13 Extremely unlikely, extremely unlikely.
00:20:19 And so it is a ridiculous notion
00:20:21 if we care about how probable and certain our ideas might be.
00:20:26 But I think it’s incredibly important
00:20:28 to talk about science in that way
00:20:32 and to not resort to, well, it’s true.
00:20:35 It’s true in the same way that a mathematical theorem
00:20:39 is true.
00:20:41 And I think we’re kind of like being pretty pedantic
00:20:46 about defining this stuff.
00:20:48 But like, sure, I could take a rocket ship out
00:20:51 and I could orbit earth and look at it
00:20:53 and it would look like a ball, right?
00:20:56 But I still can’t prove that I’m not living in a simulation,
00:20:59 that I’m not a brain in a vat,
00:21:00 that this isn’t all an elaborate ruse
00:21:02 created by some technologically advanced
00:21:04 extraterrestrial civilization.
00:21:06 So there’s always some doubt and that’s fine.
00:21:11 That’s exciting.
00:21:12 And I think that kind of doubt, practically speaking,
00:21:14 is useful when you start talking about quantum mechanics
00:21:17 or string theory, sort of, it helps.
00:21:20 To me, that kind of adds a little spice
00:21:23 into the thinking process of scientists.
00:21:26 So, I mean, just as a thought experiment,
00:21:30 your video asks, okay, say the earth is flat.
00:21:33 What would the forces, when you walk about this flat earth,
00:21:36 feel like to the human?
00:21:38 That’s a really nice thought experiment to think about.
00:21:40 Right, because what’s really nice about it
00:21:42 is that it’s a funny thought experiment,
00:21:45 but you actually wind up accidentally learning
00:21:48 a whole lot about gravity and about relativity
00:21:51 and geometry.
00:21:53 And I think that’s really the goal of what I’m doing.
00:21:56 I’m not trying to like convince people
00:21:57 that the earth is round.
00:21:58 I feel like you either believe that it is or you don’t
00:22:01 and like, that’s, you know, how can I change that?
00:22:04 What I can do is change how you think
00:22:06 and how you are introduced to important concepts.
00:22:10 Like, well, how does gravity operate?
00:22:13 Oh, it’s all about the center of mass of an object.
00:22:16 So right, on a sphere, we’re all pulled towards the middle,
00:22:19 essentially the centroid geometrically,
00:22:21 but on a disc, ooh, you’re gonna be pulled at a weird angle
00:22:24 if you’re out near the edge.
00:22:25 And that stuff’s fascinating.
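The "pulled at a weird angle" effect can be sketched numerically. This is a toy model of my own (arbitrary units, brute-force grid sum), not anything from the video:

```python
import math

# Sum the gravitational pull from a uniform flat disc to see which
# way "down" points as you walk from the center toward the rim.
# G and the surface density are set to 1; lengths are in disc radii.

def disc_gravity(px, height=0.05, radius=1.0, n=200):
    """Net gravitational acceleration at a point (px, 0, height)
    standing above a uniform disc lying in the z = 0 plane."""
    ax = az = 0.0
    step = 2 * radius / n
    cell_mass = step * step  # mass of one grid cell (density = 1)
    for i in range(n):
        for j in range(n):
            x = -radius + (i + 0.5) * step
            y = -radius + (j + 0.5) * step
            if x * x + y * y > radius * radius:
                continue  # cell lies outside the disc
            dx, dy, dz = x - px, y, -height
            r2 = dx * dx + dy * dy + dz * dz
            inv_r3 = r2 ** -1.5
            ax += cell_mass * dx * inv_r3
            az += cell_mass * dz * inv_r3
    return ax, az

# At the center, gravity points straight down; near the rim it tilts
# back toward the middle, so "down" would feel like an uphill slope.
for px in (0.0, 0.5, 0.9):
    ax, az = disc_gravity(px)
    tilt = math.degrees(math.atan2(abs(ax), abs(az)))
    print(f"x = {px:.1f}R: tilt from vertical ~ {tilt:.1f} degrees")
```

At the center the horizontal contributions cancel by symmetry; walking outward, the net pull tilts increasingly back toward the middle, which is exactly the weird-angle effect described above.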
00:22:28 Yeah, and to me, that was, that particular video
00:22:34 opened my eyes even more to what gravity is.
00:22:37 It’s just a really nice visualization tool of,
00:22:40 because you always imagine gravity with spheres,
00:22:43 with masses that are spheres.
00:22:44 Yeah.
00:22:45 And imagining gravity on masses that are not spherical,
00:22:48 some other shape, but in here, a plate, a flat object,
00:22:53 is really interesting.
00:22:54 It makes you really kind of visualize
00:22:56 in a three dimensional way the force of gravity.
00:22:57 Yeah, a disc the size of Earth would be impossible, though.
00:23:05 I think anything larger than like the moon basically
00:23:09 needs to be a sphere because gravity will round it out.
00:23:15 So you can’t have a teacup the size of Jupiter, right?
00:23:18 There’s a great book about the teacup in the universe
00:23:21 that I highly recommend.
00:23:22 I don’t remember the author.
00:23:24 I forget her name, but it’s a wonderful book.
00:23:26 So look it up.
00:23:28 I think it’s called Teacup in the Universe.
00:23:30 Just to linger on this point briefly,
00:23:32 your videos are generally super popular, people love them, right?
00:23:37 If you look at the number of likes versus dislikes,
00:23:39 this measure on YouTube, it’s incredible.
00:23:43 And as do I.
00:23:45 But this particular flat Earth video
00:23:48 has more dislikes than usual.
00:23:51 What do you, on that topic in general,
00:23:55 what’s your sense, how big is the community,
00:23:58 not just who believes in flat Earth,
00:24:00 but sort of the anti-scientific community
00:24:03 that naturally distrusts scientists in a way
00:24:08 that’s not an open-minded way,
00:24:12 like really just distrusts scientists
00:24:13 like they’re bought by some kind of mechanism
00:24:17 of some kind of bigger system
00:24:18 that’s trying to manipulate human beings.
00:24:21 What’s your sense of the size of that community?
00:24:24 You’re one of the sort of great educators in the world
00:24:28 that educates people on the exciting power of science.
00:24:34 So you’re kind of up against this community.
00:24:38 What’s your sense of it?
00:24:39 I really have no idea.
00:24:41 I haven’t looked at the likes and dislikes
00:24:44 on the flat Earth video.
00:24:45 And so I would wonder if it has a greater percentage
00:24:49 of dislikes than usual,
00:24:51 is that because of people disliking it
00:24:53 because they think that it’s a video
00:24:56 about Earth being flat and they find that ridiculous
00:25:01 and they dislike it without even really watching much?
00:25:04 Do they wish that I was more like dismissive
00:25:07 of flat Earth theories?
00:25:08 Yeah.
00:25:09 That’s possible too.
00:25:10 I know there are a lot of response videos
00:25:12 that kind of go through the episode and are pro flat Earth,
00:25:18 but I don’t know if there’s a larger community
00:25:21 of unorthodox thinkers today
00:25:25 than there have been in the past.
00:25:27 And I just wanna not lose them.
00:25:29 I want them to keep listening and thinking
00:25:32 and by calling them all idiots or something,
00:25:36 that does no good because how idiotic are they really?
00:25:41 I mean, the Earth isn’t a sphere at all.
00:25:45 We know that it’s an oblate spheroid
00:25:47 and that in and of itself is really interesting.
00:25:50 And I investigated that in Which Way Is Down,
00:25:52 where I’m like, really, down does not point
00:25:54 towards the center of the Earth.
00:25:56 It points in a different direction,
00:25:58 depending on what’s underneath you and what’s above you
00:26:01 and what’s around you.
00:26:02 The whole universe is tugging on me.
00:26:06 And then you also show that gravity is non-uniform
00:26:10 across the globe.
00:26:11 Like, there’s this, I guess, thought experiment:
00:26:14 if you build a bridge all the way across the Earth
00:26:19 and then just knock out its pillars, what would happen?
00:26:23 And you describe how it would be like a very chaotic,
00:26:27 unstable thing that’s happening
00:26:28 because gravity is non-uniform throughout the Earth.
00:26:31 Yeah, in small spaces, like the ones we work in,
00:26:36 we can essentially assume that gravity is uniform,
00:26:39 but it’s not.
00:26:40 It is weaker the further you are from the Earth.
00:26:43 And it also is going to be
00:26:47 radially pointed towards the middle of the Earth.
00:26:50 So a really large object will feel tidal forces
00:26:54 because of that non-uniformity.
00:26:55 And we can take advantage of that with satellites, right?
00:26:58 Gravitationally induced torque.
00:27:00 It’s a great way to align your satellite
00:27:01 without having to use fuel or any kind of engine.
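For a rough sense of the gravity-gradient effect just mentioned, here are illustrative numbers of my own (point-mass Earth, a structure oriented radially, i.e. straight "up"):

```python
# How non-uniform is Earth's gravity across one large orbiting object?
# Compare the pull at the near and far ends of a radially oriented
# structure at roughly ISS altitude.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # mass of Earth, kg
R_EARTH = 6_371_000.0  # mean radius of Earth, m

def g_at(r):
    """Gravitational acceleration magnitude at distance r from Earth's center."""
    return G * M_EARTH / r ** 2

altitude = 400_000.0   # roughly ISS altitude, m
length = 100.0         # a 100 m structure, near end pointing down

near = g_at(R_EARTH + altitude)
far = g_at(R_EARTH + altitude + length)
print(f"pull at near end: {near:.5f} m/s^2")
print(f"pull at far end:  {far:.5f} m/s^2")
print(f"tidal difference: {near - far:.2e} m/s^2")
```

The difference across 100 m is only a few times 10^-4 m/s^2, but acting continuously it torques an elongated satellite toward a radial alignment, which is the fuel-free attitude stabilization described above.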
00:27:05 So let’s jump back to it, artificial intelligence.
00:27:08 What’s your thought of the state of where we are at
00:27:11 currently with artificial intelligence
00:27:12 and what do you think it takes to build human level
00:27:15 or superhuman level intelligence?
00:27:17 I don’t know what intelligence means.
00:27:20 That’s my biggest question at the moment.
00:27:22 And I think it’s because my instinct is always to go,
00:27:25 well, what are the foundations here of our discussion?
00:27:28 What does it mean to be intelligent?
00:27:31 How do we measure the intelligence of an artificial machine
00:27:35 or a program or something?
00:27:37 Can we say that humans are intelligent?
00:27:39 Because there’s also a fascinating field
00:27:42 of how do you measure human intelligence.
00:27:44 Of course.
00:27:45 But if we just take that for granted,
00:27:47 saying that whatever this fuzzy intelligence thing
00:27:50 we’re talking about, humans kind of have it.
00:27:53 What would be a good test for you?
00:27:56 So Turing developed a test that’s natural language
00:27:59 conversation. Would that impress you?
00:28:01 A chat bot that you’d want to hang out
00:28:03 and have a beer with for a bunch of hours
00:28:06 or have dinner plans with.
00:28:08 Is that a good test, natural language conversation?
00:28:10 Is there something else that would impress you?
00:28:12 Or is that also too difficult to think about?
00:28:13 Oh yeah, I’m pretty much impressed by everything.
00:28:16 I think that if there was a chat bot
00:28:20 that was like incredibly, I don’t know,
00:28:23 really had a personality.
00:28:24 And if it beat the Turing test, right?
00:28:27 Like if I’m unable to tell that it’s not another person
00:28:33 but then I was shown a bunch of wires
00:28:36 and mechanical components.
00:28:39 And it was like, that’s actually what you’re talking to.
00:28:42 I don’t know if I would feel that guilty destroying it.
00:28:46 I would feel guilty because clearly it’s well made
00:28:49 and it’s a really cool thing.
00:28:51 It’s like destroying a really cool car or something
00:28:53 but I would not feel like I was a murderer.
00:28:56 So yeah, at what point would I start to feel that way?
00:28:58 And this is such a subjective psychological question.
00:29:02 If you give it movement or if you have it act as though
00:29:07 or perhaps really feel pain as I destroy it
00:29:11 and scream and resist, then I’d feel bad.
00:29:15 Yeah, it’s beautifully put.
00:29:16 And let’s just say it acts like it’s in pain.
00:29:20 So if you just have a robot that, not screams,
00:29:25 but just, like, moans in pain if you kick it,
00:29:28 that immediately just puts it in a class
00:29:30 that we humans, it becomes, we anthropomorphize it.
00:29:35 It almost immediately becomes human.
00:29:37 So that’s a psychology question
00:29:39 as opposed to sort of a physics question.
00:29:40 Right, I think that’s a really good instinct to have.
00:29:43 If the robot.
00:29:45 Screams.
00:29:46 Screams and moans, even if you don’t believe
00:29:50 that it has the mental experience,
00:29:52 the qualia of pain and suffering,
00:29:55 I think it’s still a good instinct to say,
00:29:56 you know what, I’d rather not hurt it.
00:29:59 The problem is that instinct can get us in trouble
00:30:02 because then robots can manipulate that.
00:30:05 And there’s different kinds of robots.
00:30:08 There’s robots like the Facebook and the YouTube algorithm
00:30:10 that recommends the video,
00:30:11 and they can manipulate in the same kind of way.
00:30:14 Well, let me ask you just to stick
00:30:16 on artificial intelligence for a second.
00:30:17 Do you have worries about existential threats from AI
00:30:21 or existential threats from other technologies
00:30:23 like nuclear weapons that could potentially destroy life
00:30:27 on earth or damage it to a very significant degree?
00:30:31 Yeah, of course I do.
00:30:32 Especially the weapons that we create.
00:30:35 There’s all kinds of famous ways to think about this.
00:30:38 And one is that, wow, what if we don’t see
00:30:41 advanced alien civilizations because of the danger
00:30:46 of technology?
00:30:50 What if we reach a point,
00:30:51 and I think there’s a channel, Thoughty2,
00:30:55 geez, I wish I remembered the name of the channel,
00:30:57 but he delves into this kind of limit
00:30:59 of maybe once you discover radioactivity and its power,
00:31:04 you’ve reached this important hurdle.
00:31:07 And the reason that the skies are so empty
00:31:09 is that no one’s ever managed to survive as a civilization
00:31:13 once they have that destructive power.
00:31:16 And when it comes to AI, I’m not really very worried
00:31:22 because I think that there are plenty of other people
00:31:24 that are already worried enough.
00:31:26 And oftentimes these worries are just,
00:31:30 they just get in the way of progress.
00:31:32 And they’re questions that we should address later.
00:31:37 And I think I talk about this in my interview
00:31:41 with the self driving autonomous vehicle guy,
00:31:47 as I think it was a bonus scene
00:31:48 from the trolley problem episode.
00:31:52 And I’m like, wow, what should a car do
00:31:54 if this really weird contrived scenario happens
00:31:56 where it has to swerve and save the driver, but kill a kid?
00:32:00 And he’s like, well, what would a human do?
00:32:03 And if we resist technological progress
00:32:07 because we’re worried about all of these little issues,
00:32:10 then it gets in the way.
00:32:11 And we shouldn’t avoid those problems,
00:32:14 but we shouldn’t allow them to be stumbling blocks
00:32:16 to advancement.
00:32:18 So the folks like Sam Harris or Elon Musk
00:32:22 are saying that we’re not worried enough.
00:32:24 So the worry should not paralyze technological progress,
00:32:28 but we’re sort of marching,
00:32:30 technology is marching forward without the key scientists,
00:32:35 the developers of the technology,
00:32:37 worrying about it overnight having some effects
00:32:42 that would be very detrimental to society.
00:32:45 So to push back on your thought of the idea
00:32:49 that there’s enough people worrying about it,
00:32:51 Elon Musk says, there’s not enough people
00:32:53 worrying about it.
00:32:54 That’s the kind of balance question,
00:32:58 it’s like folks who are really focused
00:33:01 on nuclear deterrence are saying
00:33:03 there’s not enough people worried
00:33:04 about nuclear deterrence, right?
00:33:06 So it’s an interesting question of what is a good threshold
00:33:10 of people to worry about these things?
00:33:12 And if it’s too many people that are worried, you’re right.
00:33:15 It’ll be like the press over reporting on it
00:33:18 and it would halt technological progress.
00:33:21 If not enough, then we can march straight ahead
00:33:24 into that abyss that human beings might be destined for
00:33:29 with the progress of technology.
00:33:31 Yeah, I don’t know what the right balance is
00:33:33 of how many people should be worried
00:33:35 and how worried should they be,
00:33:36 but we’re always worried about new technology.
00:33:40 We know that Plato was worried about the written word.
00:33:42 He was like, we shouldn’t teach people to write
00:33:45 because then they won’t use their minds to remember things.
00:33:48 There have been concerns over technology
00:33:51 and its advancement since the beginning of recorded history.
00:33:55 And so, I think, however,
00:33:58 these conversations are really important to have
00:34:01 because again, we learn a lot about ourselves.
00:34:03 If we’re really scared of some kind of AI
00:34:06 like coming into being that is conscious or whatever
00:34:09 and can self replicate, we already do that every day.
00:34:13 It’s called humans being born.
00:34:14 They’re not artificial, they’re humans,
00:34:17 but they’re intelligent and I don’t wanna live in a world
00:34:20 where we’re worried about babies being born
00:34:21 because what if they become evil?
00:34:24 Right.
00:34:25 What if they become mean people?
00:34:25 What if they’re thieves?
00:34:27 Maybe we should just like, what, not have babies born?
00:34:31 Like maybe we shouldn’t create AI.
00:34:33 It’s like, we will want to have safeguards in place
00:34:39 in the same way that we know, look,
00:34:41 a kid could be born that becomes some kind of evil person,
00:34:44 but we have laws, right?
00:34:47 And it’s possible that with advances in genetics,
00:34:51 we’ll be able to, and it’s a scary thought to say,
00:34:58 know that my child, if born, would have an 83% chance
00:35:05 of being a psychopath, right?
00:35:08 If it’s something genetic and we can detect it,
00:35:15 what to do with that information
00:35:16 is a difficult ethical question.
00:35:20 Yeah, and I’d like to find an answer that isn’t,
00:35:22 well, let’s not have them live.
00:35:24 You know, I’d like to find an answer that is,
00:35:26 well, all human life is worthy.
00:35:30 And if you have an 83% chance of becoming a psychopath,
00:35:33 well, you still deserve dignity.
00:35:38 And you still deserve to be treated well.
00:35:42 You still have rights.
00:35:43 At least at this part of the world, at least in America,
00:35:45 there’s a respect for individual life in that way.
00:35:49 That’s, well, to me, but again, I’m in this bubble,
00:35:54 is a beautiful thing.
00:35:55 But there’s other cultures where individual human life
00:35:59 is not that important, where a society,
00:36:02 so I was born in the Soviet Union,
00:36:04 where the strength of nation and society together
00:36:07 is more important than any one particular individual.
00:36:10 So it’s an interesting also notion,
00:36:12 the stories we tell ourselves.
00:36:13 I like the one where individuals matter,
00:36:16 but it’s unclear that that’s what the future holds.
00:36:19 Well, yeah, and I mean, let me even throw this out.
00:36:21 Like, what is artificial intelligence?
00:36:23 How can it be artificial?
00:36:25 I really think that we get pretty obsessed
00:36:28 and stuck on the idea that there is some thing
00:36:30 that is a wild human, a pure human organism
00:36:34 without technology.
00:36:35 But I don’t think that’s a real thing.
00:36:37 I think that humans and human technology are one organism.
00:36:42 Look at my glasses, okay?
00:36:44 If an alien came down and saw me,
00:36:47 would they necessarily know that this is an invention,
00:36:50 that I don’t grow these organically from my body?
00:36:53 They wouldn’t know that right away.
00:36:55 And the written word, and spoons, and cups,
00:37:00 these are all pieces of technology.
00:37:02 We are not alone as an organism.
00:37:06 And so the technology we create,
00:37:09 whether it be video games or artificial intelligence
00:37:11 that can self replicate and hate us,
00:37:14 it’s actually all the same organism.
00:37:16 When you’re in a car, where do you end and the car begin?
00:37:19 It seems like a really easy question to answer,
00:37:21 but the more you think about it,
00:37:22 the more you realize, wow,
00:37:23 we are in this symbiotic relationship with our inventions.
00:37:27 And there are plenty of people who are worried about it.
00:37:30 And there should be,
00:37:30 but it’s inevitable.
00:37:32 And I think that even just thinking of ourselves
00:37:35 as individual intelligences may be a silly notion
00:37:41 because it’s much better to think
00:37:44 of the entirety of human civilization,
00:37:46 all living organisms on earth, as a single living organism,
00:37:50 as a single intelligent creature,
00:37:52 because you’re right, everything’s intertwined.
00:37:54 Everything is deeply connected.
00:37:57 So we mentioned, you know,
00:37:57 Musk, so you’re a curious lover of science.
00:38:03 What do you think of the efforts that Elon Musk is doing
00:38:06 with space exploration, with electric vehicles,
00:38:13 with autopilot, sort of getting into the space
00:38:17 of autonomous vehicles, with The Boring Company tunneling under LA,
00:38:21 and Neuralink trying to build brain machine
00:38:24 interfaces that communicate between machines
00:38:28 and human brains?
00:38:28 Well, it’s really inspiring.
00:38:30 I mean, look at the fandom that he’s amassed.
00:38:34 It’s not common for someone like that
00:38:39 to have such a following.
00:38:40 And so it’s… Engineering nerd.
00:38:42 Yeah, so it’s really exciting.
00:38:44 But I also think that a lot of responsibility
00:38:46 comes with that kind of power.
00:38:47 So like if I met him, I would love to hear how he feels
00:38:50 about the responsibility he has.
00:38:53 When there are people who are such a fan of your ideas
00:38:59 and your dreams and share them so closely with you,
00:39:04 you have a lot of power.
00:39:06 And he didn’t always have that, you know?
00:39:09 He wasn’t born as Elon Musk.
00:39:11 Well, he was, but well, he was named that later.
00:39:13 But the point is that I wanna know the psychology
00:39:18 of becoming a figure like him.
00:39:23 Well, I don’t even know how to phrase the question right,
00:39:25 but it’s a question about what you do
00:39:27 when your following, your fans, becomes so large
00:39:35 that it’s almost bigger than you.
00:39:37 And how do you responsibly manage that?
00:39:41 And maybe it doesn’t worry him at all.
00:39:42 And that’s fine too.
00:39:43 But I’d be really curious.
00:39:45 And I think there are a lot of people that go through this
00:39:47 when they realize, whoa, there are a lot of eyes on me.
00:39:50 There are a lot of people who really take what I say
00:39:53 very earnestly and take it to heart and will defend me.
00:39:57 And whew, that can be dangerous.
00:40:04 And you have to be responsible with it.
00:40:07 Both in terms of impact on society
00:40:09 and psychologically for the individual,
00:40:11 just the burden psychologically on Elon?
00:40:15 Yeah, yeah, how does he think about that?
00:40:18 Part of his persona.
00:40:21 Well, let me throw that right back at you
00:40:23 because in some ways you’re just a funny guy
00:40:28 that’s gotten a humongous following,
00:40:31 a funny guy with a curiosity.
00:40:34 You’ve got a huge following.
00:40:36 How do you psychologically deal with the responsibility?
00:40:40 In many ways you have a reach
00:40:41 in many ways bigger than Elon Musk.
00:40:44 What is the burden that you feel in educating,
00:40:49 being one of the biggest educators in the world,
00:40:51 where everybody’s listening to you,
00:40:53 and actually, most of the world
00:40:58 that uses YouTube for educational material
00:41:01 trusts you as a source of good, strong scientific thinking.
00:41:07 It’s a burden and I try to approach it
00:41:11 with a lot of humility and sharing.
00:41:16 Like I’m not out there doing
00:41:18 a lot of scientific experiments.
00:41:20 I am sharing the work of real scientists
00:41:23 and I’m celebrating their work and the way that they think
00:41:26 and the power of curiosity.
00:41:29 But I wanna make it clear at all times that like,
00:41:32 look, we don’t know all the answers
00:41:35 and I don’t think we’re ever going to reach a point
00:41:37 where we’re like, wow, and there you go.
00:41:39 That’s the universe.
00:41:40 It’s this equation, you plug in some conditions or whatever
00:41:43 and you do the math
00:41:44 and you know what’s gonna happen tomorrow.
00:41:46 I don’t think we’re ever gonna reach that point,
00:41:47 but I think that there is a tendency
00:41:51 to sometimes believe in science and become elitist
00:41:56 and become, I don’t know, hard when in reality
00:41:58 it should humble you and make you feel smaller.
00:42:01 I think there’s something very beautiful
00:42:03 about feeling very, very small and very weak
00:42:07 and to feel that you need other people.
00:42:10 So I try to keep that in mind and say,
00:42:13 look, thanks for watching.
00:42:14 Vsauce is not, I’m not Vsauce, you are.
00:42:16 When I start the episodes, I say,
00:42:18 hey, Vsauce, Michael here.
00:42:20 Vsauce and Michael are actually a different thing
00:42:22 in my mind.
00:42:23 I don’t know if that’s always clear,
00:42:24 but yeah, I have to approach it that way
00:42:26 because it’s not about me.
00:42:30 Yeah, so it’s not even,
00:42:31 you’re not feeling the responsibility.
00:42:33 You’re just sort of plugging into this big thing
00:42:36 that is scientific exploration of our reality
00:42:40 and you’re a voice that represents a bunch,
00:42:42 but you’re just plugging into this big Vsauce ball
00:42:47 that others, millions of others are plugged into.
00:42:49 Yeah, and I’m just hoping to encourage curiosity
00:42:53 and responsible thinking
00:42:56 and an embrace of doubt
00:43:01 and being okay with that.
00:43:05 So next week I’m talking to Cristos Goodrow.
00:43:08 I’m not sure if you’re familiar who he is,
00:43:09 but he’s the VP of engineering,
00:43:11 head of the quote unquote YouTube algorithm
00:43:14 or the search and discovery.
00:43:16 So let me ask, first high level,
00:43:20 do you have a question for him
00:43:25 that you would ask if you could get an honest answer?
00:43:28 But more generally,
00:43:30 how do you think about the YouTube algorithm
00:43:32 that drives some of the design decisions you make
00:43:38 as you ask and answer some of the questions you do?
00:43:42 How would you improve this algorithm in your mind in general?
00:43:45 So just what would you ask him?
00:43:47 And outside of that,
00:43:49 how would you like to see the algorithm improve?
00:43:52 Well, I think of the algorithm as a mirror.
00:43:56 It reflects what people put in
00:43:58 and we don’t always like what we see in that mirror.
00:44:01 From the individual mirror
00:44:02 to the mirror of society?
00:44:05 Both, in the aggregate,
00:44:07 it’s reflecting back what people on average want to watch.
00:44:11 And when you see things being recommended to you,
00:44:15 it’s reflecting back what it thinks you want to see.
00:44:19 And specifically, I would guess that it’s
00:44:22 not just what you want to see,
00:44:23 but what you will click on
00:44:25 and what you will watch some of and stay on YouTube
00:44:30 because of.
00:44:32 I don’t think that, this is all me guessing,
00:44:34 but I don’t think that YouTube cares
00:44:38 if you only watch like a second of a video,
00:44:41 as long as the next thing you do is open another video.
00:44:45 If you close the app or close the site,
00:44:49 that’s a problem for them
00:44:50 because they’re not a subscription platform.
00:44:52 They’re not like, look,
00:44:53 you’re giving us 20 bucks a month no matter what,
00:44:56 so who cares?
00:44:57 They need you to watch and spend time there and see ads.
00:45:02 So one of the things I’m curious about is
00:45:03 whether they do consider
00:45:09 your longer term development as a human being,
00:45:12 which I think ultimately will make you feel better
00:45:15 about using YouTube in the long term
00:45:17 and allow you to stick with it for longer.
00:45:19 Because even if you feed the dopamine rush in the short term
00:45:23 and you keep clicking on cat videos,
00:45:25 eventually you sort of wake up like from a drug
00:45:28 and say, I need to quit this.
00:45:30 So I wonder how much they’re trying to optimize
00:45:32 for the long term, because when I look at
00:45:35 your videos, they aren’t exactly, no offense,
00:45:39 the most clickable.
00:45:41 Yet to me they’re both clickable
00:45:44 and I watch the entire thing,
00:45:47 and I feel like a better human after I’ve watched it, right?
00:45:49 So they’re not just optimized for clickability.
00:45:54 So my thought is, how do you think of it?
00:45:59 And does it affect your own content?
00:46:02 Like how deep you go,
00:46:03 how profound you explore the directions and so on.
00:46:07 I’ve been really lucky in that I don’t worry too much
00:46:11 about the algorithm.
00:46:12 I mean, look at my thumbnails.
00:46:13 I don’t really go too wild with them.
00:46:17 And with Mind Field, where I’m in partnership with YouTube
00:46:19 on the thumbnails, I’m often like, let’s pull this back.
00:46:22 Let’s be mysterious.
00:46:23 But usually I’m just trying to do
00:46:25 what everyone else is not doing.
00:46:27 So if everyone’s doing crazy Photoshop kind of thumbnails,
00:46:30 I’m like, what if the thumbnail’s just a line?
00:46:34 And what if the title is just a word?
00:46:37 And I kind of feel like all of the Vsauce channels
00:46:41 have cultivated an audience that expects that.
00:46:43 And so they would rather Jake make a video
00:46:45 that’s just called Stains than one called
00:46:48 I Explored Stains, Shocking.
00:46:50 But there are other audiences out there that want that.
00:46:53 And I think most people kind of want
00:46:57 what you see the algorithm favoring,
00:46:58 which is mainstream traditional celebrity
00:47:02 and news kind of information.
00:47:03 I mean, that’s what makes YouTube really different
00:47:05 than other streaming platforms.
00:47:06 No one’s like, what’s going on in the world?
00:47:08 I’ll open up Netflix to find out.
00:47:10 But you do open up Twitter to find that out.
00:47:12 You open up Facebook and you can open up YouTube
00:47:14 because you’ll see that the trending videos
00:47:16 are like what happened amongst the traditional mainstream
00:47:20 people in different industries.
00:47:22 And that’s what’s being shown.
00:47:24 And it’s not necessarily YouTube saying,
00:47:27 we want that to be what you see.
00:47:29 It’s that that’s what people click on.
00:47:31 When they see Ariana Grande, you know,
00:47:33 reads a love letter from like her high school sweetheart,
00:47:36 they’re like, I wanna see that.
00:47:38 And when they see a video from me
00:47:39 that’s got some lines and math and it’s called Laws & Causes,
00:47:41 they’re like, well, I mean, I’m just on the bus.
00:47:45 Like I don’t have time to dive into a whole lesson.
00:47:48 So, you know, before you get super mad at YouTube,
00:47:52 you should say, really,
00:47:53 they’re just reflecting back human behavior.
00:47:55 Is there something you would improve about the algorithm
00:48:00 knowing of course, that as far as we’re concerned,
00:48:02 it’s a black box, so we don’t know how it works.
00:48:04 Right, and I don’t think that even anyone at YouTube
00:48:06 really knows what it’s doing.
00:48:07 They know what they’ve tweaked, but then it learns.
00:48:09 I think that it learns and it decides how to behave.
00:48:13 And sometimes the YouTube employees are left going,
00:48:16 I don’t know.
00:48:17 Maybe we should like change the value
00:48:19 of how much it, you know, worries about watch time.
00:48:22 And maybe it should worry more about something else.
00:48:24 I don’t know.
00:48:25 But I mean, I would like to see,
00:48:28 I don’t know what they’re doing and not doing.
00:48:30 Well, is there a conversation
00:48:32 that you think they should be having just internally,
00:48:35 whether they’re having it or not?
00:48:37 Is there something,
00:48:38 should they be thinking about the longterm future?
00:48:41 Should they be thinking about educational content
00:48:44 and whether that’s educating about what just happened
00:48:48 in the world today, news or educational content,
00:48:50 like what you’re providing,
00:48:51 which is asking big sort of timeless questions
00:48:54 about the way the world works.
00:48:56 Well, it’s interesting.
00:48:58 What should they think about?
00:48:59 Because it’s called YouTube, not our tube.
00:49:02 And that’s why I think they have
00:49:04 so many phenomenal educational creators.
00:49:08 You don’t have shows like 3Blue1Brown
00:49:11 or Physics Girl or Looking Glass Universe or Up and Atom
00:49:14 or The Brain Scoop or, I mean, I could go on and on.
00:49:18 They aren’t on Amazon Prime and Netflix
00:49:21 and they don’t have commissioned shows from those platforms.
00:49:24 It’s all organically happening
00:49:25 because there are people out there
00:49:26 that want to share their passion for learning,
00:49:30 that wanna share their curiosity.
00:49:32 And YouTube could promote those kinds of shows more,
00:49:37 but first of all, they probably wouldn’t get as many clicks
00:49:43 and YouTube needs to make sure that the average user
00:49:45 is always clicking and staying on the site.
00:49:47 They could still promote it more for the good of society,
00:49:51 but then we’re making some really weird claims
00:49:52 about what’s good for society
00:49:53 because I think that cat videos
00:49:55 are also an incredibly important part
00:49:58 of what it means to be a human.
00:50:00 I mentioned this quote before from Unamuno about,
00:50:02 look, I’ve seen a cat like estimate distances
00:50:05 and calculate a jump more often than I’ve seen a cat cry.
00:50:09 And so things that play with our emotions
00:50:12 and make us feel things can be cheesy and can feel cheap,
00:50:15 but like, man, that’s very human.
00:50:18 And so even the dumbest vlog is still so important
00:50:23 that I don’t think I have a better claim to take its spot
00:50:27 than it has to have that spot.
00:50:29 It puts a mirror to us, the beautiful parts,
00:50:32 the ugly parts, the shallow parts, the deep parts.
00:50:35 You’re right.
00:50:36 What I would like to see is,
00:50:39 I miss the days when engaging with content on YouTube
00:50:43 helped push it into my subscribers’ timelines.
00:50:47 It used to be that when I liked a video,
00:50:49 say from Veritasium, it would show up in the feed
00:50:54 on the front page of the app or the website of my subscribers.
00:50:58 And I knew that if I liked a video,
00:51:00 I could send it 100,000 views or more.
00:51:03 That no longer is true,
00:51:05 but I think that was a good user experience.
00:51:07 When I subscribe to someone, when I’m following them,
00:51:09 I want to see more of what they like.
00:51:13 I want them to also curate the feed for me.
00:51:15 And I think that Twitter and Facebook are doing that,
00:51:17 also in some ways that are kind of annoying,
00:51:20 but I would like that to happen more.
00:51:22 And I think we would see communities being stronger
00:51:25 on YouTube if it was that way instead of YouTube going,
00:51:27 well, technically Michael liked this Veritasium video,
00:51:29 but people are way more likely to click on Carpool Karaoke.
00:51:33 So I don’t even care who they are, just give them that.
00:51:36 Not saying anything against Carpool Karaoke,
00:51:38 that is an extremely important part of our society,
00:51:43 what it means to be a human on earth, you know, but.
00:51:46 I’ll say it sucks, but.
00:51:48 Yeah, but a lot of people would disagree with you
00:51:51 and they should be able to see as much of that as they want.
00:51:53 And I think even people who don’t think they like it
00:51:55 should still be really aware of it
00:51:57 because it’s such an important thing.
00:51:59 It’s such an influential thing.
00:52:00 But yeah, I just wish that like new channels I discover
00:52:03 and that I subscribe to,
00:52:04 I wish that my subscribers found out about that
00:52:06 because especially in the education community,
00:52:10 a rising tide floats all boats.
00:52:11 If you watch a video from Numberphile,
00:52:14 you’re just more likely to want to watch an episode from me,
00:52:16 whether it be on Vsauce1 or D!NG.
00:52:18 It’s not competitive in the way that traditional TV was
00:52:21 where it’s like, well, if you tune into that show,
00:52:23 it means you’re not watching mine
00:52:24 because they both air at the same time.
00:52:26 So helping each other out through collaborations
00:52:29 takes a lot of work,
00:52:30 but just through engaging, commenting on their videos,
00:52:32 liking their videos, subscribing to them, whatever,
00:52:36 that I would love to see become easier and more powerful.
00:52:41 So a quick and impossibly deep question,
00:52:46 last question about mortality.
00:52:48 You’ve spoken about death as an interesting topic.
00:52:52 Do you think about your own mortality?
00:52:55 Yeah, every day, it’s really scary.
00:52:59 So what do you think is the meaning of life
00:53:04 that mortality makes very explicit?
00:53:07 So why are you here on earth, Michael?
00:53:12 What’s the point of this whole thing?
00:53:18 What does mortality in the context of the whole universe
00:53:23 make you realize about yourself?
00:53:26 Just you, Michael Stevens.
00:53:29 Well, it makes me realize
00:53:31 that I am destined to become a notion.
00:53:35 I’m destined to become a memory and we can extend life.
00:53:39 I think there’s really exciting things being done
00:53:42 to extend life,
00:53:43 but we still don’t know how to protect you
00:53:47 from some accident that could happen,
00:53:48 some unforeseen thing.
00:53:50 Maybe we could save my connectome
00:53:54 and recreate my consciousness digitally,
00:53:56 but even that could be lost
00:54:00 if it’s stored on a physical medium or something.
00:54:02 So basically, I just think that embracing
00:54:07 and realizing how cool it is,
00:54:09 that someday I will just be an idea.
00:54:11 And there won’t be a Michael anymore
00:54:13 that can be like, no, that’s not what I meant.
00:54:16 It’ll just be what people,
00:54:17 they have to guess what I meant.
00:54:19 And they’ll remember me
00:54:23 and how I live on as that memory
00:54:25 will maybe not even be who I want it to be.
00:54:29 But there’s something powerful about that.
00:54:31 And there’s something powerful
00:54:32 about letting future people run the show themselves.
00:54:38 I think I’m glad to get out of their way at some point
00:54:41 and say, all right, it’s your world now.
00:54:43 So you, the physical entity, Michael,
00:54:47 have ripple effects in the space of ideas
00:54:50 that far outlives you in ways that you can’t control,
00:54:54 but it’s nevertheless fascinating to think,
00:54:56 I mean, especially with you,
00:54:57 you can imagine an alien species
00:54:59 when they finally arrive and destroy all of us
00:55:01 would watch your videos to try to figure out
00:55:04 what were the questions that these people.
00:55:05 But even if they didn’t,
00:55:08 I still think that there will be ripples.
00:55:11 Like when I say memory,
00:55:13 I don’t specifically mean people remember my name
00:55:16 and my birth date and like there’s a photo of me
00:55:18 on Wikipedia, like all that can be lost,
00:55:20 but I still would hope that people ask questions
00:55:23 and teach concepts in some of the ways
00:55:26 that I have found useful and satisfying.
00:55:28 Even if they don’t know that I was the one
00:55:29 who tried to popularize it, that’s fine.
00:55:32 But if Earth was completely destroyed,
00:55:35 like burnt to a crisp, everything on it today,
00:55:39 the universe wouldn’t care.
00:55:42 Like Jupiter’s not gonna go, oh no, and that could happen.
00:55:49 So we do however have the power to launch things
00:55:55 into space to try to extend how long our memory exists.
00:56:02 And what I mean by that is,
00:56:04 we are recording things about the world
00:56:06 and we’re learning things and writing stories
00:56:08 and all of this and preserving that is truly
00:56:13 what I think is the essence of being a human.
00:56:16 We are autobiographers of the universe
00:56:20 and we’re really good at it.
00:56:21 We’re better than fossils.
00:56:22 We’re better than light spectrum.
00:56:25 We’re better than any of that.
00:56:26 We collect much more detailed memories
00:56:29 of what’s happening, much better data.
00:56:32 And so that should be our legacy.
00:56:37 And I hope that that’s kind of mine too
00:56:40 in terms of people remembering something
00:56:42 or having some kind of effect.
00:56:44 But even if I don’t, you can’t not have an effect.
00:56:47 This is not me feeling like,
00:56:49 I hope that I have this powerful legacy.
00:56:50 It’s like, no matter who you are, you will.
00:56:53 But you also have to embrace the fact
00:56:57 that that impact might look really small and that’s okay.
00:57:01 One of my favorite quotes is from Tess of the d’Urbervilles.
00:57:04 And it’s along the lines of, the measure of your life
00:57:08 depends not on your external displacement
00:57:10 but your subjective experience.
00:57:13 If I am happy and those that I love are happy,
00:57:16 can that be enough?
00:57:17 Because if so, excellent.
00:57:21 I think there’s no better place to end it, Michael.
00:57:23 Thank you so much.
00:57:24 It was an honor to meet you.
00:57:25 Thanks for talking to me.
00:57:25 Thank you, it was a pleasure.
00:57:27 Thanks for listening to this conversation
00:57:29 with Michael Stevens.
00:57:30 And thank you to our presenting sponsor, Cash App.
00:57:33 Download it, use code LexPodcast,
00:57:35 you’ll get $10 and $10 will go to FIRST,
00:57:38 a STEM education nonprofit that inspires
00:57:41 hundreds of thousands of young minds
00:57:43 to learn, to dream of engineering our future.
00:57:47 If you enjoy this podcast, subscribe on YouTube,
00:57:49 give it five stars on Apple Podcast,
00:57:51 support it on Patreon, or connect with me on Twitter.
00:57:55 And now, let me leave you with some words of wisdom
00:57:58 from Albert Einstein.
00:58:00 The important thing is not to stop questioning.
00:58:03 Curiosity has its own reason for existence.
00:58:06 One cannot help but be in awe
00:58:09 when he contemplates the mysteries of eternity,
00:58:11 of life, the marvelous structure of reality.
00:58:14 It is enough if one tries merely to comprehend
00:58:18 a little of this mystery every day.
00:58:20 Thank you for listening and hope to see you next time.