Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot #49

Transcript

00:00:00 The following is a conversation with Elon Musk, Part 2, the second time we spoke on the podcast,

00:00:07 with parallels, if not in quality, then in outfit, to the, objectively speaking, greatest

00:00:13 sequel of all time, Godfather Part 2. As many people know, Elon Musk is a leader of Tesla,

00:00:20 SpaceX, Neuralink, and the Boring Company. What may be less known is that he’s a world

00:00:26 class engineer and designer, constantly emphasizing first principles thinking and taking on big

00:00:32 engineering problems that many before him considered impossible. As scientists and engineers,

00:00:39 most of us don’t question the way things are done, we simply follow the momentum of the crowd.

00:00:44 But revolutionary ideas that change the world on the small and large scales happen when you

00:00:51 return to the fundamentals and ask, is there a better way? This conversation focuses on the

00:00:57 incredible engineering and innovation done in brain computer interfaces at Neuralink.

00:01:04 This work promises to help treat neurobiological diseases, to help us further understand the

00:01:09 connection between the individual neuron and the high level function of the human brain.

00:01:14 And finally, to one day expand the capacity of the brain through two way communication

00:01:20 with computational devices, the internet, and artificial intelligence systems.

00:01:25 This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,

00:01:31 Apple Podcasts, Spotify, support on Patreon, or simply connect with me on Twitter

00:01:36 at Lex Friedman, spelled F R I D M A N. And now, as an anonymous YouTube commenter referred to

00:01:43 our previous conversation as the quote, historical first video of two robots conversing without

00:01:49 supervision, here’s the second time, the second conversation with Elon Musk.

00:01:57 Let’s start with an easy question about consciousness. In your view, is consciousness

00:02:03 something that’s unique to humans or is it something that permeates all matter, almost like

00:02:07 a fundamental force of physics? I don’t think consciousness permeates all matter. Panpsychists

00:02:13 believe that. Yeah. There's a philosophical question there: how would you tell? That's true. That's a good point.

00:02:21 I believe in the scientific method. Don't want to blow your mind or anything, but the scientific

00:02:24 method is like, if you cannot test the hypothesis, then you cannot reach a meaningful conclusion that

00:02:28 it is true. Do you think consciousness, understanding consciousness, is within the

00:02:34 reach of science, of the scientific method? We can dramatically improve our understanding of

00:02:40 consciousness. You know, we'd be hard pressed to say that we understand anything with complete accuracy,

00:02:47 but can we dramatically improve our understanding of consciousness? I believe the answer is yes.

00:02:53 Does an AI system in your view have to have consciousness in order to achieve human level

00:02:58 or superhuman level intelligence? Does it need to have some of these human qualities like

00:03:03 consciousness, maybe a body, maybe a fear of mortality, capacity to love, those kinds of

00:03:11 silly human things? You know, there's the scientific method,

00:03:19 which I very much believe in, where something is true to the degree that it is testably so.

00:03:25 Otherwise, you're really just talking about, you know, preferences or untestable beliefs or

00:03:34 that kind of thing. So it ends up being somewhat of a semantic question, where

00:03:42 we're conflating a lot of things with the word intelligence. If we parse them out and say,

00:03:46 you know, are we headed towards the future where an AI will be able to outthink us in every way?

00:03:57 Then the answer is unequivocally yes.

00:04:01 In order for an AI system to outthink us in every way, does it also need to have

00:04:07 a capacity for consciousness, self awareness, and understanding?

00:04:12 It will be self aware. Yes, that's different from consciousness. I mean, to me, in terms of

00:04:18 what consciousness feels like, it feels like consciousness is in a different dimension.

00:04:22 But this could be just an illusion. You know, if you damage your brain in some way,

00:04:30 physically, you damage your consciousness, which implies that consciousness

00:04:35 is a physical phenomenon. And in my view, the thing that I think is really

00:04:42 quite likely is that digital intelligence will be able to outthink us in every way, and it will

00:04:48 simply be able to simulate what we consider consciousness. So to the degree that you would

00:04:54 not be able to tell the difference. And from the aspect of the scientific method,

00:04:58 it might as well be consciousness, if we can simulate it perfectly.

00:05:01 If you can’t tell the difference, when this is sort of the Turing test, but think of a more

00:05:06 sort of advanced version of the Turing test. If you’re if you’re talking to a digital super

00:05:13 intelligence and can’t tell if that is a computer or a human, like let’s say you’re just having

00:05:19 conversation over a phone or a video conference or something where you’re you think you’re talking

00:05:26 looks like a person makes all of the right inflections and movements and all the small

00:05:33 subtleties that constitute a human and talks like human makes mistakes like a human like

00:05:42 and you literally just can’t tell is this Are you video conferencing with a person or or an AI

00:05:49 might as well might as well be human. So on a darker topic, you’ve expressed serious concern

00:05:54 about existential threats of AI. It’s perhaps one of the greatest challenges our civilization faces,

00:06:02 but since I would say we're kind of optimistic descendants of apes, perhaps we can find several

00:06:08 paths of escaping the harm of AI.

00:06:16 So if I can give you three options, maybe you can comment which do you

00:06:21 think is the most promising. So one is scaling up efforts on AI safety and beneficial AI research

00:06:29 in hope of finding an algorithmic or maybe a policy solution. Two is becoming a multi planetary

00:06:35 species as quickly as possible. And three is merging with AI and riding the wave of that

00:06:44 increasing intelligence as it continuously improves. What do you think is most promising,

00:06:49 most interesting, as a civilization that we should invest in?

00:06:54 I think there’s a lot of tremendous amount of investment going on in AI, where there’s a lack

00:06:59 of investment is in AI safety. And there should be in my view, a government agency that oversees

00:07:07 anything related to AI to confirm that it does not represent a public safety risk,

00:07:12 just as there is a regulatory authority like the Food and Drug Administration, there's one for

00:07:20 automotive safety, there's the FAA for aircraft safety. I've really come to the conclusion that

00:07:25 it is important to have a government referee that is serving the public interest

00:07:31 in ensuring that things are safe when there's a potential danger to the public.

00:07:37 I would argue that AI is unequivocally something that has potential to be dangerous to the public,

00:07:43 and therefore should have a regulatory agency just as other things that are dangerous to the public

00:07:48 have a regulatory agency. But let me tell you, the problem with this is that the government

00:07:54 moves very slowly. And usually the way a regulatory agency comes into being

00:08:01 is that something terrible happens. There’s a huge public outcry. And years after that,

00:08:09 there’s a regulatory agency or a rule put in place, take something like, like seatbelts,

00:08:15 it was known for a decade or more that seatbelts would have a massive impact on safety and save so

00:08:25 many lives in serious injuries. And the car industry fought the requirement to put seatbelts in

00:08:32 tooth and nail. That's crazy. Yeah. And hundreds of thousands of people probably died because of that.

00:08:41 And they said people wouldn’t buy cars if they had seatbelts, which is obviously absurd.

00:08:45 Yeah, or look at the tobacco industry and how long they fought anything about smoking. That's part

00:08:51 of why I helped make that movie, Thank You for Smoking. You can sort of see just how pernicious

00:08:58 it can be when you have these companies effectively achieve regulatory capture of government.

00:09:11 People in the community refer to the advent of digital super intelligence as a singularity.

00:09:17 That is not to say that it is good or bad, but that it is very difficult to predict what will

00:09:23 happen after that point. And then there's some probability it will be bad, some probability it

00:09:28 will be good. We obviously want to affect that probability and have it be more good than bad.

00:09:35 Well, let me ask about the merger with AI question and the incredible work that's being done at Neuralink.

00:09:40 There’s a lot of fascinating innovation here across different disciplines going on. So the flexible

00:09:47 wires, the robotic sewing machine that's responsive to brain movement, everything around ensuring safety

00:09:52 and so on. So we currently understand very little about the human brain. Do you also hope that the

00:10:02 work at Neuralink will help us understand more about our human brain?

00:10:07 Yeah, I think the work in Neuralink will definitely shed a lot of insight into how the brain, the mind

00:10:13 works. Right now, just the data we have regarding how the brain works is very limited. You know,

00:10:20 we’ve got fMRI, which is kind of like putting a, you know, stethoscope on the outside

00:10:28 of a factory wall, and then putting it like all over the factory wall, and you can sort of hear

00:10:33 the sounds, but you don’t know what the machines are doing, really. It’s hard. You can infer a few

00:10:38 things, but it’s very broad brushstroke. In order to really know what’s going on in the brain,

00:10:43 you really have to have high precision sensors. And then you want to have stimulus and

00:10:47 response. Like if you trigger a neuron, what, how do you feel? What do you see? How does it change

00:10:53 your perception of the world? You're speaking to physically just getting close to the brain,

00:10:57 being able to measure signals from the brain,

00:11:04 which will sort of open the door inside the factory.

00:11:08 Yes, exactly. Being able to have high precision sensors that tell you what individual neurons

00:11:15 are doing. And then being able to trigger a neuron and see what the response is in the brain.

00:11:22 So you can see the consequences of if you fire this neuron, what happens? How do you feel? What

00:11:28 does it change? It’ll be really profound to have this in people because people can articulate

00:11:35 their change. Like if there’s a change in mood, or if they can tell you if they can see better,

00:11:43 or hear better, or be able to form sentences better or worse, or their memories are jogged,

00:11:51 or that kind of thing. So on the human side, there’s this incredible general malleability,

00:11:56 plasticity of the human brain, the human brain adapts, adjusts, and so on.

00:12:01 So that’s not that plastic, to be totally frank.

00:12:03 So there’s a firm structure, but nevertheless, there’s some plasticity. And the open question is,

00:12:09 sort of, if I could ask a broad question, how much can that plasticity be utilized? Sort of,

00:12:15 on the human side, there’s some plasticity in the human brain. And on the machine side,

00:12:20 we have neural networks, machine learning, artificial intelligence, it’s able to adjust

00:12:26 and figure out signals. So there’s a mysterious language that we don’t perfectly understand

00:12:31 that’s within the human brain. And then we’re trying to understand that language to communicate

00:12:37 both directions. So the brain is adjusting a little bit, we don’t know how much, and the

00:12:42 machine is adjusting. Where do you see, as they try to sort of reach together, almost like with

00:12:48 an alien species, trying to find a communication protocol that works, where do

00:12:53 you see the biggest benefit arriving from, on the machine side or the human side? Do you

00:12:59 see both of them working together? I think the machine side is far more malleable than the

00:13:03 biological side, by a huge amount. So it’ll be the machine that adapts to the brain. That’s the only

00:13:12 thing that’s possible. The brain can’t adapt that well to the machine. You can have neurons start

00:13:19 to regard an electrode as another neuron, because a neuron just, there’s like the pulse, and so

00:13:24 something else is pulsing. So there is that elasticity in the interface, which we believe is

00:13:32 something that can happen. But the vast majority of the malleability will have to be on the machine

00:13:37 side. But it’s interesting, when you look at that synaptic plasticity at the interface side,

00:13:43 there might be like an emergent plasticity. Because it’s a whole nother, it’s not like in the

00:13:48 brain, it’s a whole nother extension of the brain. You know, we might have to redefine what it means

00:13:53 to be malleable for the brain. So maybe the brain is able to adjust to external interfaces. There

00:13:59 will be some adjustments to the brain, because there’s going to be something reading and stimulating

00:14:03 the brain. And so it will adjust to that thing. But the vast majority of the adjustment

00:14:12 will be on the machine side. It just has to be that, otherwise it will not

00:14:18 work. Ultimately, like, we currently operate on two layers: we have sort of a limbic, like

00:14:23 primitive brain layer, which is where all of our kind of impulses are coming from. It’s sort of

00:14:29 like we’ve got a monkey brain with a computer stuck on it. That’s the

00:14:34 human brain. And a lot of our impulses and everything are driven by the monkey brain.

00:14:39 And the computer, the cortex is constantly trying to make the monkey brain happy.

00:14:44 It’s not the cortex that’s steering the monkey brain, it’s the monkey brain steering the cortex.

00:14:51 You know, the cortex is the part that tells the story of the whole thing. So we convince ourselves

00:14:56 it’s, it’s more interesting than just the monkey brain. The cortex is like what we call like human

00:15:01 intelligence. You know, it’s just like, that’s like the advanced computer relative to other

00:15:05 creatures. The other creatures do not have either; really, they don’t have the

00:15:11 computer, or they have a very weak computer relative to humans. But it sort

00:15:19 of seems like surely the really smart thing should control the dumb thing. But actually,

00:15:24 the dumb thing controls the smart thing. So do you think some of the same kind of machine learning

00:15:30 methods, whether that’s natural language processing applications are going to be applied for the

00:15:35 communication between the machine and the brain to learn how to do certain things like movement

00:15:43 of the body, how to process visual stimuli, and so on. Do you see the value of using machine

00:15:50 learning to understand the language of the two way communication with the brain? Sure. Yeah,

00:15:55 absolutely. I mean, we’re a neural net, and, you know, AI is basically a neural net.

00:16:02 So it’s like digital neural net will interface with biological neural net.

00:16:08 And hopefully bring us along for the ride. Yeah. But the vast majority of our intelligence will be

00:16:14 digital. So, like, think of the difference in intelligence between your cortex

00:16:23 and your limbic system is gigantic, your limbic system really has no comprehension of what the

00:16:29 hell the cortex is doing. It’s just literally hungry, you know, or tired or angry or sexy or

00:16:40 something, you know. And then that communicates that impulse to the cortex and

00:16:47 tells the cortex to go satisfy that. Then a great deal of, like, a massive amount of thinking,

00:16:54 like a truly stupendous amount of thinking, has gone into sex without purpose,

00:17:00 without procreation. Which is actually quite a silly action in the absence of procreation. It’s

00:17:11 a bit silly. Why are you doing it? Because it makes the limbic system happy. That’s why. That’s why.

00:17:17 But it’s pretty absurd, really. Well, the whole of existence is pretty absurd in some kind of sense.

00:17:24 Yeah. But I mean, a lot of computation has gone into how can I do more of that with

00:17:32 procreation not even being a factor? This is, I think, a very important area of research by NSFW.

00:17:40 An agency that should receive a lot of funding, especially after this conversation.

00:17:44 I propose the formation of a new agency. Oh, boy.

00:17:48 What is the most exciting or some of the most exciting things that you see in the future impact

00:17:53 of Neuralink, both in the science, the engineering and societal broad impact?

00:17:59 Neuralink, I think, at first will solve a lot of brain related diseases. So it could be anything

00:18:05 from like autism, schizophrenia, memory loss, like everyone experiences memory loss at certain points

00:18:11 in age. Parents can’t remember their kids’ names and that kind of thing. So there’s a tremendous

00:18:24 amount of good that Neuralink can do in solving critical damage to the brain or the spinal cord.

00:18:34 There’s a lot that can be done to improve quality of life of individuals. And those will be steps

00:18:40 to address the existential risk associated with digital superintelligence. Like we will not be

00:18:48 able to be smarter than a digital supercomputer. So therefore, if you cannot beat them, join them.

00:18:58 And at least we’ll have that option.

00:19:01 So you have hope that Neuralink will be able to be a kind of connection to allow us to merge,

00:19:09 to ride the wave of the improving AI systems. I think the chance is above zero percent.

00:19:15 So it’s non-zero. There’s a chance. Have you seen Dumb and Dumber?

00:19:21 Yes. So I’m saying there’s a chance. He’s saying one in a billion or one in a million,

00:19:26 whatever it was, in Dumb and Dumber. You know, it went from maybe one in a million to improving.

00:19:31 Maybe it’ll be one in a thousand and then one in a hundred, then one in ten. Depends on the rate

00:19:35 of improvement of Neuralink and how fast we’re able to make progress.

00:19:41 Well, I’ve talked to a few folks here that are quite brilliant engineers, so I’m excited.

00:19:45 Yeah, I think it’s like fundamentally good, you know,

00:19:48 giving somebody back full motor control after they’ve had a spinal cord injury.

00:19:53 You know, restoring brain functionality after a stroke,

00:19:57 solving debilitating genetically oriented brain diseases. These are all incredibly

00:20:02 great, I think. And in order to do these, you have to be able to interface with neurons at

00:20:07 a detailed level and you need to be able to fire the right neurons, read the right neurons, and

00:20:13 then effectively you can create a circuit, replace what’s broken

00:20:19 with silicon and essentially fill in the missing functionality. And then over time,

00:20:26 we can develop a tertiary layer. So if like the limbic system is the primary layer, then the

00:20:31 cortex is like the second layer. And as I said, obviously the cortex is vastly more intelligent

00:20:36 than the limbic system, but people generally like the fact that they have a limbic system

00:20:40 and a cortex. I haven’t met anyone who wants to delete either one of them. They’re like,

00:20:44 okay, I’ll keep them both. That’s cool. The limbic system is kind of fun.

00:20:47 That’s where the fun is, absolutely. And then people generally don’t want to lose their

00:20:53 cortex either. They like having the cortex and the limbic system. And then there’s a tertiary

00:20:59 layer, which will be digital superintelligence. And I think there’s room for optimism given that

00:21:05 the cortex is very intelligent and the limbic system is not, and yet they work together

00:21:11 well. Perhaps there can be a tertiary layer where digital superintelligence lies, and that will be

00:21:18 vastly more intelligent than the cortex, but still coexist peacefully and in a benign manner with the

00:21:24 cortex and limbic system. That’s a super exciting future, both in low level engineering that I saw

00:21:30 as being done here and the actual possibility in the next few decades. It’s important that

00:21:36 Neuralink solve this problem sooner rather than later, because the point at which we have digital

00:21:40 superintelligence, that’s when we pass the singularity and things become just very uncertain.

00:21:45 It doesn’t mean that they’re necessarily bad or good, but at the point at which we pass the singularity,

00:21:48 things become extremely unstable. So we want to have a human brain interface before the singularity,

00:21:55 or at least not long after it, to minimize existential risk for humanity and consciousness

00:22:01 as we know it. So there’s a lot of fascinating actual engineering, low level problems here at

00:22:07 Neuralink that are quite exciting. The problems that we face in Neuralink are material science,

00:22:15 electrical engineering, software, mechanical engineering, microfabrication. It’s a bunch of

00:22:22 engineering disciplines, essentially. That’s what it comes down to, is you have to have a

00:22:26 tiny electrode, so small it doesn’t hurt neurons, but it’s got to last for as long as a person. So

00:22:35 it’s going to last for decades. And then you’ve got to take that signal, you’ve got to process

00:22:40 that signal locally at low power. So we need a lot of chip design engineers, because we’re going to

00:22:48 do signal processing, and do so in a very power efficient way, so that we don’t heat your brain

00:22:56 up, because the brain is very heat sensitive. And then we’ve got to take those signals and

00:23:01 we’re going to do something with them. And then we’ve got to stimulate back, to have bidirectional

00:23:10 communication.

00:23:15 So if somebody’s good at material science, software, mechanical engineering, electrical

00:23:20 engineering, chip design, microfabrication, those are the things we need to work on.
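The low-power local signal processing described above has, in extracellular recording generally, a common first building block: threshold-based spike detection. The sketch below is purely illustrative and not Neuralink's actual pipeline; the sampling rate, threshold multiplier, and refractory period are assumed values.

```python
import numpy as np

def detect_spikes(signal: np.ndarray, fs: float, k: float = 5.0) -> np.ndarray:
    """Return sample indices where the signal crosses a negative threshold.

    The threshold is set from a robust noise estimate (median absolute
    deviation), a standard trick in extracellular spike detection.
    """
    sigma = np.median(np.abs(signal)) / 0.6745  # MAD-based noise estimate
    thresh = k * sigma
    # Indices i where signal[i] >= -thresh but signal[i+1] < -thresh
    crossings = np.flatnonzero((signal[1:] < -thresh) & (signal[:-1] >= -thresh))
    refractory = int(1e-3 * fs)  # 1 ms dead time so one spike is counted once
    spikes, last = [], -refractory
    for c in crossings:
        if c - last >= refractory:
            spikes.append(int(c))
            last = c
    return np.asarray(spikes, dtype=int)

# Synthetic example: Gaussian noise plus three large negative-going spikes
rng = np.random.default_rng(0)
fs = 20_000.0  # 20 kHz sampling rate (assumed)
sig = rng.normal(0.0, 1.0, 2000)
for t in (300, 900, 1500):
    sig[t] -= 12.0
print(detect_spikes(sig, fs))
```

The MAD-based noise estimate keeps the threshold robust to the spikes themselves, and a comparator like this is cheap enough to run on-implant, which is the point of processing the signal locally rather than streaming raw data.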

00:23:27 We need to be good at material science, so that we can have tiny electrodes that last a long time.

00:23:32 And the material science problem is a tough one, because

00:23:35 you’re trying to read and stimulate electrically in an electrically active area. Your brain is

00:23:43 very electrically active and electrochemically active. So how do you have a coating on the

00:23:49 electrode that doesn’t dissolve over time and is safe in the brain? This is a very hard problem.

00:23:59 And then how do you collect those signals in a way that is most efficient? Because you really

00:24:06 just have very tiny amounts of power to process those signals. And then we need to automate the

00:24:12 whole thing so it’s like LASIK. If this is done by neurosurgeons, there’s no way it can scale to

00:24:20 a large number of people. And it needs to scale to a large number of people, because I think

00:24:24 ultimately we want the future to be determined by a large number of humans. Do you think that

00:24:32 this has a chance to revolutionize surgery period? So neurosurgery and surgery all across?

00:24:39 Yeah, for sure. It’s got to be like LASIK. If LASIK had to be done by hand by a person,

00:24:45 that wouldn’t be great. It’s done by a robot. And the ophthalmologist kind of just needs to make

00:24:54 sure your head’s in the right position, and then they just press a button and go.

00:25:00 SmartSummon and soon Autopark takes on the full beautiful mess of parking lots and their human

00:25:05 to human nonverbal communication. I think it has actually the potential to have a profound impact

00:25:13 in changing how our civilization looks at AI and robotics, because this is the first time human

00:25:19 beings, people that don’t own a Tesla may have never seen a Tesla or heard about a Tesla,

00:25:24 get to watch hundreds of thousands of cars without a driver. Do you see it this way, almost like an

00:25:30 education tool for the world about AI? Do you feel the burden of that, the excitement of that,

00:25:36 or do you just think it’s a smart parking feature? I do think you are getting at something

00:25:42 important, which is most people have never really seen a robot. And what is the car that is

00:25:47 autonomous? It’s a four wheeled robot. Yeah, it communicates a certain sort of message with

00:25:53 everything from safety to the possibility of what AI could bring to its current limitations,

00:25:59 its current challenges, it’s what’s possible. Do you feel the burden of that almost like a

00:26:04 communicator educator to the world about AI? We were just really trying to make people’s

00:26:09 lives easier with autonomy. But now that you mentioned it, I think it will be an eye opener

00:26:15 to people about robotics, because they’ve really never seen, most people have never seen, a robot. And

00:26:20 there are hundreds of thousands of Teslas; it won’t be long before there’s a million of them that

00:26:25 have autonomous capability and drive without a person in them. And you can see the kind of

00:26:31 evolution of the car’s personality and thinking with each iteration of autopilot,

00:26:40 you can see it’s, it’s uncertain about this, or it gets it, but now it’s more certain. Now it’s

00:26:47 moving in a slightly different way. Like, I can tell immediately if a car is on Tesla autopilot,

00:26:53 because it’s got just little nuances of movement, it just moves in a slightly different way.

00:26:58 Cars on Tesla autopilot, for example, on the highway are far more precise about being in the

00:27:02 center of the lane than a person. If you drive down the highway and look at where

00:27:08 the human driven cars are within their lane, they’re like bumper cars. They’re like moving all

00:27:13 over the place. The car in autopilot, dead center. Yeah, so the incredible work that’s going into

00:27:20 that neural network, it’s learning fast. Autonomy is still very, very hard. We don’t actually know

00:27:27 how hard it is fully, of course. You look at most problems you tackle, this one included,

00:27:34 with an exponential lens, but even with an exponential improvement, things can take longer

00:27:39 than expected sometimes. So where does Tesla currently stand on its quest for full autonomy?

00:27:47 What’s your sense? When can we see successful deployment of full autonomy?

00:27:55 Well, on the highway already, the probability of intervention is extremely low.

00:28:00 Yes. So for highway autonomy, with the latest release, especially the probability of needing

00:28:08 to intervene is really quite low. In fact, I’d say for stop and go traffic,

00:28:13 it’s far safer than a person right now. The probability of an injury or impact is much,

00:28:18 much lower for autopilot than a person. And then with navigating autopilot, you can change lanes,

00:28:25 take highway interchanges, and then we’re coming at it from the other direction, which is low speed,

00:28:30 full autonomy. And in a way, this is like, how does a person learn to drive? You learn to drive

00:28:35 in the parking lot. You know, the first time you learn to drive probably wasn’t jumping on

00:28:40 August Street in San Francisco. That’d be crazy. You learn to drive in the parking lot,

00:28:45 get things right at low speed. And then the missing piece that we’re working on is traffic

00:28:52 lights and stop streets. Stop streets, I would say actually also relatively easy, because, you know,

00:28:59 you kind of know where the stop street is, worst case it’s geocoded, and then use visualization to

00:29:04 see where the line is and stop at the line to eliminate the GPS error. So actually, I’d say it’s

00:29:10 probably complex traffic lights and very windy roads are the two things that need to get solved.
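The stop-street idea, a roughly geocoded stop location as a prior plus a precise vision fix on the painted line, amounts to fusing two distance estimates by their uncertainties. A minimal one-dimensional sketch, with hypothetical names and numbers rather than Tesla's implementation:

```python
from dataclasses import dataclass

@dataclass
class StopEstimate:
    distance_m: float   # estimated distance to the stop line along the lane
    sigma_m: float      # 1-sigma uncertainty of that estimate

def fuse(prior: StopEstimate, vision: StopEstimate) -> StopEstimate:
    """1-D Bayesian fusion of a geocoded prior with a vision measurement.

    Inverse-variance weighting: the more precise vision measurement
    dominates, which is what "stop at the line to eliminate the GPS
    error" amounts to.
    """
    w_p = 1.0 / prior.sigma_m ** 2
    w_v = 1.0 / vision.sigma_m ** 2
    d = (w_p * prior.distance_m + w_v * vision.distance_m) / (w_p + w_v)
    sigma = (w_p + w_v) ** -0.5
    return StopEstimate(d, sigma)

# Geocoded map says the line is ~30 m ahead, but GPS is only good to ~5 m;
# the camera sees the painted line at 27.5 m with ~0.3 m error (made-up numbers).
fused = fuse(StopEstimate(30.0, 5.0), StopEstimate(27.5, 0.3))
print(fused)
```

Because the vision measurement is far more precise than GPS, the fused estimate snaps to the visually detected line while the geocoded prior mainly serves to say that a stop line is coming up at all.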

00:29:19 What’s harder, perception or control for these problems? So being able to perfectly perceive

00:29:24 everything, or figuring out a plan once you perceive everything, how to interact with all the

00:29:29 agents in the environment in your sense, from a learning perspective, is perception or action

00:29:35 harder? And that giant, beautiful multitask learning neural network. The hardest thing is

00:29:42 having an accurate representation of the physical objects in vector space. So taking the

00:29:48 visual input, primarily visual input, some sonar and radar, and then creating an accurate

00:29:56 vector space representation of the objects around you. Once you have an accurate vector space

00:30:02 representation, the planning and control is relatively easy.

00:30:08 Basically, once you have an accurate vector space representation, it’s kind of like a video

00:30:14 game. Like cars in Grand Theft Auto or something, they work pretty well. They drive

00:30:19 down the road, they don’t crash, you know, pretty much, unless you crash into them. That’s because

00:30:24 they’ve got an accurate vector space representation of where the cars are, and they’re

00:30:27 just rendering that as the output. Do you have a sense, high level, that

00:30:33 Tesla’s on track on being able to achieve full autonomy? So on the highway? Yeah, absolutely.
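The "video game" point, that planning becomes straightforward once perception delivers object states in an accurate vector space, can be illustrated with a toy constant-velocity rollout. This is a deliberately simplified sketch with made-up parameters, not Tesla's planner:

```python
from dataclasses import dataclass

@dataclass
class Track:
    x: float; y: float    # object position in the ego frame, metres
    vx: float; vy: float  # object velocity, m/s

def predict(t: Track, dt: float) -> Track:
    """Constant-velocity rollout: the 'video game' physics step."""
    return Track(t.x + t.vx * dt, t.y + t.vy * dt, t.vx, t.vy)

def safe_speed(tracks: list[Track], ego_speed: float, horizon_s: float = 3.0,
               gap_m: float = 5.0) -> float:
    """Pick the fastest ego speed (driving straight ahead) that keeps
    `gap_m` clearance from every predicted object over the horizon."""
    for speed in (ego_speed, ego_speed / 2, 0.0):  # crude candidate set
        ok = True
        steps = 30
        for i in range(1, steps + 1):
            dt = horizon_s * i / steps
            ego_x = speed * dt
            for t in tracks:
                p = predict(t, dt)
                # In our lane (|y| < 2 m) and inside the safety gap ahead?
                if abs(p.y) < 2.0 and 0 <= p.x - ego_x < gap_m:
                    ok = False
            if not ok:
                break
        if ok:
            return speed
    return 0.0

# A car 20 m ahead in our lane doing 5 m/s while we do 15 m/s:
print(safe_speed([Track(20.0, 0.0, 5.0, 0.0)], 15.0))
```

The planner never touches pixels; it only rolls the vector-space tracks forward and checks clearance, which is why an accurate representation makes the control half comparatively easy.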

00:30:42 And still no driver state, driver sensing? We have driver sensing with torque on the wheel.

00:30:48 That’s right. Yeah. By the way, just a quick comment on karaoke. Most people think it’s fun,

00:30:55 but I also think it is a driving feature. I’ve been saying for a long time, singing in the car

00:30:59 is really good for attention management and vigilance management. That’s right.

00:31:02 Tesla karaoke is great. It’s one of the most fun features of the car. Do you think of a connection

00:31:08 between fun and safety sometimes? Yeah, you can do both at the same time. That’s great.

00:31:12 I just met with Ann Druyan, wife of Carl Sagan, who directed Cosmos. I’m generally a big fan of Carl

00:31:19 Sagan. He’s super cool. And had a great way of putting things. All of our consciousness,

00:31:25 all civilization, everything we’ve ever known and done is on this tiny blue dot.

00:31:29 People also get too trapped in these, like, squabbles amongst humans

00:31:34 and don’t think of the big picture. They take civilization and our continued existence for

00:31:39 granted. They shouldn’t do that. Look at the history of civilizations. They rise and they fall. And now

00:31:47 civilization is all globalized. And so civilization, I think, now rises and falls together.

00:31:56 There’s not geographic isolation. This is a big risk. Things don’t always go up. That

00:32:05 is an important lesson of history. In 1990, at the request of Carl Sagan, the Voyager

00:32:12 One spacecraft, which is a spacecraft that’s reaching out farther than anything human made

00:32:18 into space, turned around to take a picture of Earth. And that’s

00:32:24 a picture of Earth from 3.7 billion miles away. And as you’re talking about the pale blue dot,

00:32:31 Earth takes up less than a single pixel in that image. Yes. Appearing as a tiny

00:32:37 blue dot, as a pale blue dot, as Carl Sagan called it. So he spoke about this dot of ours in 1994.

00:32:46 And if you could humor me, I was wondering if in the last two minutes you could read the words

00:32:54 that he wrote describing this pale blue dot. Sure. Yes, it’s funny. The universe appears to be 13.8

00:33:01 billion years old. Earth is like four and a half billion years old.

00:33:07 In another half billion years or so, the sun will expand and probably evaporate the oceans and make

00:33:14 life impossible on Earth, which means that if it had taken consciousness 10% longer to evolve,

00:33:19 it would never have evolved at all. That’s only 10% longer. And I wonder how many dead one planet

00:33:29 civilizations there are out there in the cosmos.

00:33:31 That never made it to the other planet and ultimately extinguished themselves or were destroyed

00:33:35 by external factors. Probably a few. It’s only just possible to travel to Mars. Just barely.

00:33:46 If G was 10% more, it wouldn’t work really.

00:33:50 If G was 10% lower, it would be easy. Like you can go single stage from the surface of Mars all the

00:34:00 way to the surface of the Earth. Because Mars is 37% Earth’s gravity. We need a giant booster

00:34:08 to get off the Earth. Channeling Carl Sagan. Look again at that dot. That’s here. That’s home. That’s us.

00:34:25 On it, everyone you love, everyone you know, everyone you’ve ever heard of, every human being

00:34:30 who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident

00:34:37 religions, ideologies and economic doctrines, every hunter and forager, every hero and coward,

00:34:42 every creator and destroyer of civilization, every king and peasant, every young couple in love,

00:34:49 every mother and father, hopeful child, inventor and explorer, every teacher of morals, every

00:34:57 corrupt politician, every superstar, every supreme leader, every saint and sinner in the history of

00:35:06 our species lived there, on a mote of dust suspended in a sunbeam. Our planet is a lonely speck in the

00:35:13 great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help

00:35:20 will come from elsewhere to save us from ourselves. The Earth is the only world known so far to harbor

00:35:25 life. There is nowhere else, at least in the near future, to which our species could migrate. This

00:35:32 is not true. This is false. Mars. And I think Carl Sagan would agree with that. He couldn’t even

00:35:39 imagine it at that time. So thank you for making the world dream. And thank you for talking today.

00:35:45 I really appreciate it. Thank you.