Transcript
00:00:00 The following is a conversation with Elon Musk.
00:00:03 He’s the CEO of Tesla, SpaceX, Neuralink, and a cofounder of several other companies.
00:00:09 This conversation is part of the Artificial Intelligence podcast.
00:00:13 The series includes leading researchers in academia and industry, including CEOs
00:00:18 and CTOs of automotive, robotics, AI, and technology companies.
00:00:24 This conversation happened after the release of the paper from our group at MIT
00:00:28 on Driver Functional Vigilance during use of Tesla’s Autopilot.
00:00:32 The Tesla team reached out to me offering a podcast conversation with Mr.
00:00:36 Musk.
00:00:37 I accepted, with full control of questions I could ask and the choice
00:00:41 of what is released publicly.
00:00:43 I ended up editing out nothing of substance.
00:00:46 I’ve never spoken with Elon before this conversation, publicly or privately.
00:00:51 Neither he nor his companies have any influence on my opinion, nor on the rigor
00:00:56 and integrity of the scientific method that I practice in my position at MIT.
00:01:01 Tesla has never financially supported my research, and I’ve never owned a Tesla
00:01:05 vehicle, and I’ve never owned Tesla stock.
00:01:09 This podcast is not a scientific paper.
00:01:12 It is a conversation.
00:01:13 I respect Elon as I do all other leaders and engineers I’ve spoken with.
00:01:18 We agree on some things and disagree on others.
00:01:20 My goal with these conversations is always to understand the way
00:01:24 the guest sees the world.
00:01:26 One particular point of disagreement in this conversation was the extent to
00:01:30 which camera based driver monitoring will improve outcomes and for how long
00:01:36 it will remain relevant for AI assisted driving.
00:01:39 As someone who works on and is fascinated by human centered artificial
00:01:44 intelligence, I believe that if implemented and integrated effectively,
00:01:48 camera based driver monitoring is likely to be of benefit in both the short
00:01:52 term and the long term.
00:01:55 In contrast, Elon and Tesla’s focus is on the improvement of Autopilot such
00:02:01 that its statistical safety benefits outweigh any concern of human behavior
00:02:06 and psychology.
00:02:09 Elon and I may not agree on everything, but I deeply respect the engineering
00:02:13 and innovation behind the efforts that he leads.
00:02:16 My goal here is to catalyze a rigorous, nuanced, and objective discussion in
00:02:21 industry and academia on AI assisted driving.
00:02:26 One that ultimately makes for a safer and better world.
00:02:30 And now here’s my conversation with Elon Musk.
00:02:35 What was the vision, the dream, of Autopilot in the beginning, at the
00:02:40 big-picture system level, when it was first conceived and started being
00:02:44 installed in the cars as hardware in 2014? What was the vision, the dream?
00:02:48 I wouldn’t characterize it as a vision or dream; it’s simply that there are obviously
00:02:52 two massive revolutions in the automobile industry.
00:02:59 One is the transition to electrification and then the other is autonomy.
00:03:06 And it became obvious to me that in the future, any car that does not have
00:03:14 autonomy would be about as useful as a horse, which is not to say that
00:03:19 there’s no use, it’s just rare and somewhat idiosyncratic if somebody
00:03:23 has a horse at this point.
00:03:24 It’s just obvious that cars will drive themselves completely.
00:03:27 It’s just a question of time.
00:03:29 And if we did not participate in the autonomy revolution, then our cars
00:03:37 would not be useful to people relative to cars that are autonomous.
00:03:42 I mean, an autonomous car is arguably worth five to 10 times more than
00:03:49 a car which is not autonomous.
00:03:52 In the long term.
00:03:53 It depends what you mean by long term, but let’s say at least for the
00:03:57 next five years, perhaps 10 years.
00:04:00 So there are a lot of very interesting design choices with autopilot early on.
00:04:04 First is showing, on the instrument cluster, or in the Model 3 on the
00:04:10 center stack display, what the combined sensor suite sees. What was the
00:04:15 thinking behind that choice?
00:04:16 Was there a debate?
00:04:17 What was the process?
00:04:19 The whole point of the display is to provide a health check on the
00:04:25 vehicle’s perception of reality.
00:04:26 So the vehicle’s taking information from a bunch of sensors, primarily
00:04:30 cameras, but also radar and ultrasonics, GPS, and so forth.
00:04:34 And then that information is rendered into vector space,
00:04:41 with a bunch of objects with properties like lane lines and
00:04:46 traffic lights and other cars.
00:04:48 And then that vector space is rerendered onto a display.
00:04:53 So you can confirm whether the car knows what’s going on or not
00:04:58 by looking out the window.
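The pipeline Musk describes (sensor inputs fused into vector-space objects, then re-rendered on a display) can be sketched in miniature. Everything below is illustrative: the object types, coordinates, and text-grid rendering are made up for the sketch and are not Tesla's actual representation.

```python
from dataclasses import dataclass

# Hypothetical object schema; names are illustrative only.
@dataclass
class DetectedObject:
    kind: str      # e.g. "car", "lane_line", "traffic_light"
    x: float       # meters ahead of the ego vehicle
    y: float       # meters left (+) / right (-) of the ego vehicle

def to_vector_space(camera_objs, radar_objs):
    """Merge per-sensor detections into one vector-space object list.
    Here we naively concatenate and de-duplicate by rounded position."""
    seen, fused = set(), []
    for obj in camera_objs + radar_objs:
        key = (obj.kind, round(obj.x), round(obj.y))
        if key not in seen:
            seen.add(key)
            fused.append(obj)
    return fused

def render(objs, rows=5, cols=11, scale=10.0):
    """Re-render vector space as a coarse top-down text grid (ego at bottom center)."""
    grid = [["." for _ in range(cols)] for _ in range(rows)]
    for obj in objs:
        r = rows - 1 - min(rows - 1, int(obj.x / scale))
        c = cols // 2 + max(-(cols // 2), min(cols // 2, int(obj.y / scale)))
        grid[r][c] = obj.kind[0].upper()
    return "\n".join("".join(row) for row in grid)

cams = [DetectedObject("car", 20, 0), DetectedObject("lane_line", 10, -5)]
radar = [DetectedObject("car", 20, 0)]          # the same car, seen by a second sensor
fused = to_vector_space(cams, radar)            # duplicate detection removed
picture = render(fused)
```

The point of the sketch is the separation Musk draws: the fused vector-space list is the car's model of reality; the rendering is just a human-readable health check on it.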
00:04:59 Right.
00:05:00 I think that’s an extremely powerful thing for people to get an understanding,
00:05:04 to sort of become one with the system and understand what
00:05:06 the system is capable of.
00:05:08 Now, have you considered showing more?
00:05:11 So if we look at the computer vision, you know, like road segmentation,
00:05:16 lane detection, vehicle detection, object detection, underlying the system,
00:05:19 there is, at the edges, some uncertainty.
00:05:22 Have you considered revealing
00:05:28 the uncertainty in the system, the sort of probabilities
00:05:34 associated with, say, image recognition or something like that?
00:05:36 Yeah.
00:05:37 So right now it shows like the vehicles in the vicinity, a very clean, crisp image.
00:05:41 And people do confirm that there’s a car in front of me and the system
00:05:45 sees there’s a car in front of me. But could it help people build an intuition
00:05:49 of what computer vision is by showing some of the uncertainty?
00:05:53 Well, in my car I always look at the debug view.
00:05:57 And there’s two debug views.
00:05:59 One is augmented vision, which I’m sure you’ve seen, where it’s
00:06:04 basically, we draw boxes and labels around objects that are recognized.
00:06:10 And then there’s one called the visualizer, which is basically a vector
00:06:15 space representation summing up the input from all sensors. It doesn’t
00:06:22 show any pictures; it
00:06:28 basically shows the car’s view of the world in vector space.
00:06:32 But I think this is very difficult for normal people to
00:06:36 understand; they would not know what they’re looking at.
00:06:39 So it’s almost an HMI challenge. The current things that are being
00:06:42 displayed are optimized for the general public’s understanding of
00:06:47 what the system is capable of.
00:06:48 It’s like, if you have no idea how computer vision works or anything,
00:06:51 you can sort of look at the screen and see if the car knows what’s going on.
00:06:55 And then if you’re a development engineer, or if you
00:06:59 have the development build like I do, then you
00:07:02 can see all the debug information. But that would just be
00:07:07 like total gibberish to most people.
00:07:11 What’s your view on how to best distribute effort?
00:07:14 So there’s three, I would say technical aspects of autopilot
00:07:17 that are really important.
00:07:18 So it’s the underlying algorithms, like the neural network architecture,
00:07:22 there’s the data that it’s trained on, and then there’s the hardware development.
00:07:26 There may be others, but say it’s algorithm, data, and hardware. You
00:07:32 only have so much money, only have so much time. What do you think is the most
00:07:35 important thing to allocate resources to? Or do you see it as pretty
00:07:40 evenly distributed between those three?
00:07:43 We automatically get vast amounts of data because all of our cars have eight
00:07:51 external-facing cameras and radar, and usually 12 ultrasonic sensors, GPS,
00:07:58 obviously, and IMU.
00:08:02 And so we basically have a fleet of about 400,000
00:08:10 cars on the road that have that level of data. I think you keep quite
00:08:13 close track of it, actually.
00:08:14 Yes.
00:08:15 Yeah.
00:08:15 So we’re approaching half a million cars on the road that have the full sensor
00:08:20 suite.
00:08:21 I’m not sure how many other cars on the road have this sensor
00:08:27 suite, but I would be surprised if it’s more than 5,000, which means that we
00:08:32 have 99% of all the data.
00:08:35 So there’s this huge inflow of data.
00:08:37 Absolutely.
00:08:37 Massive inflow of data. And it’s taken us about three years, but now
00:08:43 we’ve finally developed our full self-driving computer, which can process
00:08:51 an order of magnitude as much as the Nvidia system that we currently have in
00:08:54 the cars. And to use it, you just unplug the Nvidia
00:08:59 computer and plug the Tesla computer in, and that’s it.
00:09:01 And in fact, we’re still exploring the boundaries
00:09:06 of its capabilities, but we’re able to run the cameras at full frame rate, full
00:09:10 resolution, not even crop the images, and it’s still got headroom even on one
00:09:16 of the systems. The whole full self-driving computer is really two computers,
00:09:21 two systems on a chip that are fully redundant.
00:09:23 So you could put a bolt through basically any part of that system and it still
00:09:27 works.
00:09:27 The redundancy, are they perfect copies of each other? Or is it purely for
00:09:33 redundancy, as opposed to an arguing-machines kind of architecture where they’re both
00:09:37 making decisions?
00:09:37 This is purely for redundancy.
00:09:39 I think it’s more like, if you have a twin-engine commercial
00:09:43 aircraft, the system will operate best if both systems are operating, but
00:09:51 it’s capable of operating safely on one.
00:09:53 But as it is right now, we haven’t even hit
00:09:59 the edge of performance.
00:10:01 So there’s no need to actually distribute functionality across both SoCs.
00:10:10 We can actually just run a full duplicate on each one.
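The fully redundant two-SoC design described here (identical workload run in full duplicate, either chip sufficient on its own) can be modeled as a toy. The `RedundantComputer` class, the `plan` function, and the failure hook below are invented purely for illustration.

```python
def plan(frame):
    # Stand-in for the driving computation both chips run in full duplicate.
    return sum(frame) / len(frame)

class RedundantComputer:
    """Toy model of two fully redundant SoCs: each runs the identical
    workload, and the system keeps operating if either one fails."""
    def __init__(self):
        self.healthy = [True, True]

    def fail(self, idx):
        # Simulate a "bolt through" one SoC.
        self.healthy[idx] = False

    def step(self, frame):
        outputs = [plan(frame) for h in self.healthy if h]
        if not outputs:
            raise RuntimeError("both SoCs down")
        return outputs[0]  # any surviving duplicate is authoritative

rc = RedundantComputer()
frame = [0.1, 0.2, 0.3]
a = rc.step(frame)   # both SoCs healthy
rc.fail(0)
b = rc.step(frame)   # still operates on the remaining SoC, same answer
```

The design choice mirrors the aircraft analogy in the conversation: the duplicates exist for availability, not for cross-checking each other's decisions.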
00:10:13 So you haven’t really explored or hit the limit of this?
00:10:17 Not hit the limit yet.
00:10:18 So the magic of deep learning is that it gets better with data.
00:10:22 You said there’s a huge inflow of data, but the thing about driving is that the really
00:10:28 valuable data to learn from is the edge cases.
00:10:32 So how do you… I mean, I’ve heard you talk somewhere about autopilot
00:10:39 disengagements being an important moment of time to use.
00:10:42 Are there other edge cases,
00:10:46 or perhaps can you speak to those edge cases,
00:10:53 what aspects of them might be valuable, or if you have other ideas for how to
00:10:56 discover more and more edge cases in driving?
00:11:00 Well, there’s a lot of things that are learned.
00:11:02 There are certainly edge cases where, say, somebody is on autopilot and they
00:11:06 take over. And then, okay, that’s a trigger that goes to our
00:11:12 system that says, okay, did they take over for convenience, or did they take
00:11:16 over because the autopilot wasn’t working properly?
00:11:19 There’s also, let’s say we’re trying to figure out what is the optimal
00:11:23 spline for traversing an intersection.
00:11:27 Then the ones where there are no interventions are the right ones.
00:11:33 So you then say, okay, when it looks like this, do the following.
00:11:38 And then you get the optimal spline for
00:11:42 navigating a complex intersection.
00:11:44 So that’s for this.
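The idea of extracting an optimal path from intervention-free traversals can be sketched roughly as follows. The waypoints and the pointwise averaging (standing in for an actual spline fit) are purely illustrative assumptions, not Tesla's pipeline.

```python
# Toy version: each recorded traversal is a list of (x, y) waypoints plus a
# flag for whether the driver intervened. Keep only intervention-free
# traversals and average them pointwise into a single target path
# (a stand-in for fitting a smooth spline through them).

def optimal_path(traversals):
    clean = [path for path, intervened in traversals if not intervened]
    n = len(clean)
    return [
        (sum(p[i][0] for p in clean) / n, sum(p[i][1] for p in clean) / n)
        for i in range(len(clean[0]))
    ]

runs = [
    ([(0, 0), (5, 1), (10, 3)], False),   # smooth traversal, no takeover
    ([(0, 0), (5, 3), (10, 3)], False),   # another clean traversal
    ([(0, 0), (5, 9), (10, 3)], True),    # driver intervened: excluded
]
path = optimal_path(runs)
```

The key mechanism from the conversation survives even in this caricature: interventions act as labels, and only the intervention-free traversals shape the target trajectory.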
00:11:46 So there’s the common case, where you’re trying to capture a huge amount of
00:11:50 samples of a particular intersection when things went right, and then
00:11:54 there’s the edge case where, as you said, not for convenience,
00:11:59 something didn’t go exactly right.
00:12:01 Somebody took over, somebody asserted manual control from autopilot.
00:12:05 And really, the way to look at this is: view all input as error.
00:12:08 If the user had to do input, something went wrong; all input is error.
00:12:12 That’s a powerful line.
00:12:13 That’s a powerful line to think of it that way, because it may very well be
00:12:17 error, but if you want to exit the highway, or if you want to make
00:12:21 a navigation decision that autopilot is not currently designed to make,
00:12:25 then the driver takes over.
00:12:27 How do you know the difference?
00:12:28 That’s going to change with Navigate on Autopilot, which we just
00:12:31 released, without stalk confirm.
00:12:33 So for navigation, like asserting a certain control in
00:12:38 order to do a lane change, or exit a freeway, or take a highway
00:12:42 interchange, the vast majority of that will go away with the
00:12:47 release that just went out.
00:12:48 Yeah.
00:12:49 So that, that I don’t think people quite understand how big of a step that is.
00:12:54 Yeah, they don’t.
00:12:55 So if you drive the car, then you do.
00:12:58 So you still have to keep your hands on the steering wheel currently when
00:13:00 it does the automatic lane change.
00:13:03 So there’s these big leaps through the development of
00:13:07 autopilot, through its history. What stands out to you as the big leaps?
00:13:13 I would say this one, Navigate on Autopilot
00:13:18 without having to confirm, is a huge leap.
00:13:21 It is a huge leap.
00:13:22 It also automatically overtakes slow cars.
00:13:24 So it’s both navigating and seeking the fastest lane.
00:13:31 So it’ll, you know, overtake slow cars and exit the
00:13:36 freeway and take highway interchanges.
00:13:38 And then we have traffic light recognition, which is
00:13:47 introduced initially as a warning.
00:13:50 I mean, on the development version that I’m driving, the car fully
00:13:53 stops and goes at traffic lights.
00:13:56 So those are the steps, right?
00:13:58 You’ve just mentioned something sort of hinting at a step towards full autonomy.
00:14:02 What would you say are the biggest technological roadblocks
00:14:06 to full self driving?
00:14:08 Actually, I don’t think there are. The full self-driving computer that
00:14:11 Tesla, what we call the FSD computer, that’s now in
00:14:17 production.
00:14:20 So if you order any Model S or X, or any Model 3 that has the full
00:14:26 self-driving package, you’ll get the FSD computer.
00:14:29 That was important, to have enough base computation. Then it’s
00:14:34 refining the neural net and the control software, but all of that can
00:14:39 just be provided as an over-the-air update.
00:14:42 The thing that’s really profound, and what I’ll be emphasizing at the
00:14:47 investor day that we’re having focused on autonomy, is that the
00:14:51 cars currently being produced, with the hardware currently being produced, are
00:14:55 capable of full self-driving. But capable is an interesting word, because
00:15:01 the hardware is. And as we refine the software, the capabilities will increase
00:15:07 dramatically, and then the reliability will increase dramatically, and then it
00:15:11 will receive regulatory approval.
00:15:13 So essentially buying a car today is an investment in the future.
00:15:16 You’re essentially buying… I think the most profound
00:15:21 thing is that if you buy a Tesla today, I believe you are buying an appreciating
00:15:26 asset, not a depreciating asset.
00:15:30 So that’s a really important statement there because if hardware is capable
00:15:33 enough, that’s the hard thing to upgrade usually.
00:15:37 Exactly.
00:15:37 So then the rest is a software problem.
00:15:40 Yes.
00:15:41 Software has no marginal cost really.
00:15:44 But what’s your intuition on the software side?
00:15:48 How hard are the remaining steps to get it to where
00:15:57 the experience, not just the safety, but the full experience, is something
00:16:03 that people would enjoy?
00:16:06 Well, I think people enjoy it very much on the highways.
00:16:09 It’s a total game changer for quality of life, using
00:16:15 Tesla autopilot on the highways. So it’s really just extending that
00:16:19 functionality to city streets, adding in the traffic light recognition,
00:16:26 navigating complex intersections, and then being able to navigate
00:16:32 complicated parking lots, so the car can exit a parking space and come
00:16:37 find you, even if it’s in a complete maze of a parking lot.
00:16:43 And then it can just drop you off and find a
00:16:46 parking spot by itself.
00:16:48 Yeah.
00:16:49 In terms of enjoyability, and something that people would actually
00:16:53 find a lot of use from, the parking lot is, you know,
00:16:58 rich with annoyance when you have to do it manually.
00:17:00 So there’s a lot of benefit to be gained from automation there.
00:17:04 So let me start injecting the human into this discussion a little bit.
00:17:08 So let’s talk
00:17:13 about full autonomy.
00:17:15 If you look at the current level four vehicles being tested on the
00:17:18 road, like Waymo and so on, they’re only technically autonomous.
00:17:23 They’re really level two systems with just a different design philosophy,
00:17:28 because there’s always a safety driver in almost all cases and
00:17:31 they’re monitoring the system.
00:17:33 Right.
00:17:33 Do you see Tesla’s full self-driving as, for a time to come, still requiring
00:17:42 supervision of the human being?
00:17:44 So its capabilities are powerful enough to drive, but it nevertheless requires
00:17:48 the human to still be supervising, just like a safety driver is in
00:17:54 other “fully autonomous” vehicles?
00:17:57 I think it will require detecting hands on wheel for at least six months
00:18:05 or something like that from here.
00:18:07 It really is a question of, from a regulatory standpoint, how much
00:18:15 safer than a person does autopilot need to be for it to be okay to not monitor
00:18:20 the car? And this is a debate that one can have.
00:18:25 But you need, you know, a large sample, a large amount of data,
00:18:30 so you can prove with high confidence, statistically speaking, that the car is
00:18:36 dramatically safer than a person, and that adding in the person monitoring
00:18:40 does not materially affect the safety.
00:18:44 So it might need to be like 200 or 300% safer than a person.
00:18:48 And how do you prove that? Incidents per mile?
00:18:53 Incidents per mile: crashes and fatalities. Fatalities would be a factor, but there are just not enough
00:18:58 fatalities to be statistically significant at scale. But there are enough
00:19:03 crashes; there are far more crashes than there are fatalities.
00:19:08 So you can assess the probability of a crash. Then there’s another step,
00:19:14 the probability of injury, the probability of permanent injury, and the
00:19:19 probability of death, and all of those need to be much better than a person’s,
00:19:24 by at least, perhaps, 200%.
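The statistical argument here, that there are enough crashes to show with high confidence that one fleet's rate is several times lower than another's, can be sketched with a simple two-rate comparison. The crash counts, mileages, and the normal-approximation test below are all hypothetical, not Tesla's numbers or methodology.

```python
import math

def compare_rates(crashes_a, miles_a, crashes_b, miles_b):
    """Toy check: is fleet A's crash rate lower than B's with high confidence?
    Uses a normal approximation for the difference of two Poisson rates
    (purely illustrative)."""
    ra = crashes_a / miles_a                  # crashes per mile, fleet A
    rb = crashes_b / miles_b                  # crashes per mile, fleet B
    # Standard error of the rate difference under the Poisson model.
    se = math.sqrt(crashes_a / miles_a**2 + crashes_b / miles_b**2)
    z = (rb - ra) / se                        # how many standard errors apart
    return ra, rb, z

# Hypothetical numbers: automated fleet vs. a manual-driving baseline.
ra, rb, z = compare_rates(10, 100e6, 90, 300e6)
# rb / ra is about 3, i.e. the baseline rate is roughly "200% worse",
# and a large z indicates the gap is unlikely to be statistical noise.
```

The sketch makes the point in the conversation concrete: with crash counts this large, the rate gap dwarfs its standard error, which is exactly the "high confidence, statistically speaking" that rare fatalities alone could not provide.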
00:19:28 And you think there’s, uh, the ability to have a healthy discourse with the
00:19:33 regulatory bodies on this topic?
00:19:36 I mean, there’s no question. But regulators pay a disproportionate
00:19:41 amount of attention to that which generates press.
00:19:44 This is just an objective fact.
00:19:46 And Tesla generates a lot of press.
00:19:49 So, you know, in the United States there are,
00:19:55 I think, almost 40,000 automotive deaths per year.
00:20:01 But if there are four in a Tesla, they’ll probably receive a thousand
00:20:06 times more press than anyone else.
00:20:08 So the, the psychology of that is actually fascinating.
00:20:11 I don’t think we’ll have enough time to talk about that, but I have to talk to
00:20:14 you about the human side of things.
00:20:16 So myself and our team at MIT recently released a paper on the functional
00:20:21 vigilance of drivers while using autopilot.
00:20:23 This is work we’ve been doing since autopilot was first released publicly
00:20:28 over three years ago, collecting video of driver faces and driver body.
00:20:34 So I saw that you tweeted a quote from the abstract, so I can at least, uh,
00:20:40 guess that you’ve glanced at it.
00:20:42 Yeah, I read it.
00:20:43 Can I talk you through what we found?
00:20:45 Sure.
00:20:46 Okay.
00:20:46 So it appears that, in the data we’ve collected, drivers are maintaining
00:20:53 functional vigilance. We looked at 18,900 disengagements from
00:20:57 autopilot, annotating: were they able to take over control in a timely
00:21:04 manner?
00:21:05 So they were present, looking at the road, able to take over control.
00:21:09 Okay.
00:21:09 So this goes against what many would predict from the body of literature
00:21:15 on vigilance with automation.
00:21:18 Now, the question is, do you think these results hold across the broader
00:21:22 population?
00:21:23 So ours is just a small subset.
00:21:25 Do you think, you know, one of the criticisms is that there may be a small
00:21:30 minority of drivers that are highly responsible, where their vigilance
00:21:35 decrement would increase with autopilot use?
00:21:38 I think this is all really going to be swept aside.
00:21:40 I mean, the system’s improving so much, so fast, that this is going to be a moot
00:21:46 point very soon. Vigilance, it’s like, if something’s many times safer than a
00:21:55 person, then adding a person, the effect on safety is
00:22:01 limited.
00:22:02 And in fact, it could be negative.
00:22:09 That’s really interesting.
00:22:10 So the fact that some percent of the population may
00:22:16 exhibit a vigilance decrement will not affect the overall statistics, the numbers on
00:22:20 safety?
00:22:21 No. In fact, I think it will become the case, very quickly, maybe even towards
00:22:27 the end of this year, but I’d say I’d be shocked if it’s not next year
00:22:30 at the latest, that having a human intervene will
00:22:35 decrease safety.
00:22:38 It’s like, imagine if you’re in an elevator, and it used to be that there were
00:22:42 elevator operators, and you couldn’t go on an elevator by yourself
00:22:46 and work the lever to move between floors.
00:22:49 And now nobody wants an elevator operator, because the automated
00:22:56 elevator that stops at the floors is much safer than the elevator operator.
00:23:01 And in fact, it would be quite dangerous to have someone with a lever that can
00:23:05 move the elevator between floors.
00:23:07 So that’s a, that’s a really powerful statement and really interesting one.
00:23:12 But I also have to ask, from a user experience and from a safety perspective,
00:23:16 one of the passions for me algorithmically is camera-based detection,
00:23:22 just sensing the human: detecting what the driver is looking at, cognitive
00:23:26 load, body pose. On the computer vision side, that’s a fascinating problem.
00:23:30 And many in industry believe you have to have
00:23:33 camera-based driver monitoring.
00:23:35 Do you think there could be benefit gained from driver monitoring?
00:23:39 If you have a system that’s at or below human-level
00:23:44 reliability, then driver monitoring makes sense.
00:23:48 But if your system is dramatically
00:23:51 better, more reliable than a human, then driver monitoring
00:23:55 does not help much.
00:23:59 And, like I said,
00:24:03 if you’re in an elevator,
00:24:06 do you really want some random person with a big lever
00:24:09 operating the elevator between floors?
00:24:12 I wouldn’t trust that. I’d rather have the buttons.
00:24:17 Okay.
00:24:17 You’re optimistic about the pace of improvement of the system? From
00:24:21 what you’ve seen with the full self-driving computer, the rate
00:24:25 of improvement is exponential?
00:24:28 So one of the other very interesting design choices early on that connects
00:24:32 to this is the operational design domain of autopilot.
00:24:38 So where autopilot is able to be turned on. In contrast, another vehicle
00:24:44 system that we’re studying is the Cadillac Super Cruise system.
00:24:48 That’s, in terms of ODD, very constrained to particular kinds of highways, well
00:24:53 mapped, tested, but it’s much narrower than the ODD of Tesla vehicles.
00:24:58 There’s pros and…
00:25:00 It’s like ADD.
00:25:02 Yeah.
00:25:04 That’s good.
00:25:04 That’s a, that’s a good line.
00:25:06 What was the design decision in that different philosophy
00:25:13 of thinking? There’s pros and cons. What we see with a wide ODD
00:25:18 is that Tesla drivers are able to explore more of the limitations of the
00:25:22 system, at least early on, and, together with the instrument
00:25:26 cluster display, they start to understand what the capabilities are.
00:25:30 So that’s a benefit.
00:25:31 The con is that you’re letting drivers use it basically anywhere.
00:25:38 Anywhere it could detect lanes with confidence.
00:25:41 Were there design decisions on that philosophy that were challenging,
00:25:46 that were being made there, or from the very beginning was that
00:25:51 done on purpose, with intent?
00:25:54 Well, I mean, frankly, it’s pretty crazy letting people
00:25:57 drive a two-ton death machine manually.
00:26:01 That’s crazy.
00:26:03 Like, in the future, people will be like, I can’t believe anyone was
00:26:07 just allowed to drive one of these two-ton death machines, and they
00:26:12 just drove wherever they wanted.
00:26:14 Just like elevators.
00:26:14 It was like, move the elevator with that lever, wherever you want.
00:26:17 It can stop halfway between floors if you want.
00:26:22 It’s pretty crazy.
00:26:24 So it’s going to seem like a mad thing in the future that people were driving cars.
00:26:32 So I have a bunch of questions about the human psychology, about behavior and so
00:26:36 on, but those become moot because you have faith in the AI system, not
00:26:46 faith, but belief that both the hardware side and the deep learning approach of
00:26:51 learning from data will make it just far safer than humans.
00:26:55 Yeah, exactly.
00:26:56 Recently, there were a few hackers who tricked autopilot into acting in
00:27:00 unexpected ways with adversarial examples.
00:27:03 So we all know that neural network systems are very sensitive to minor
00:27:06 disturbances to these adversarial examples on input.
00:27:10 Do you think it’s possible to defend against something like this for the
00:27:13 broader, for the industry?
00:27:15 Sure.
00:27:15 So can you elaborate on the confidence behind that answer?
00:27:22 Well, you know, a neural net is just a basic bunch of matrix math.
00:27:27 You’d have to be a very sophisticated person who really
00:27:31 understands neural nets and can basically reverse-engineer how the matrix
00:27:36 is being built, and then create a little thing that exactly causes
00:27:42 the matrix math to be slightly off.
00:27:44 But it’s very easy to then block that by basically having
00:27:49 anti-negative recognition.
00:27:51 It’s like, if the system sees something that looks like a matrix hack,
00:27:55 exclude it. It’s such an easy thing to do.
00:28:01 So learn both on the valid data and the invalid data.
00:28:05 So basically learn on the adversarial examples to be able to exclude them.
00:28:08 Yeah.
00:28:09 Like you basically want to both know what is, what is a car and
00:28:13 what is definitely not a car.
00:28:15 And you train for this is a car and this is definitely not a car.
00:28:18 Those are two different things.
00:28:20 People have no idea about neural nets, really.
00:28:23 They probably think a neural net is, you know, like a fishing net or something.
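The "anti-negative recognition" idea just described (train on what is a car and on what is definitely not a car, including adversarial-looking inputs, then exclude suspected hacks) can be caricatured with a tiny nearest-centroid classifier. The features, data, and the two-class setup below are entirely made up for illustration.

```python
# Toy sketch: train on examples of "car" AND examples of "definitely not a
# car" (including adversarial-looking patterns), then exclude anything the
# model assigns to the negative class rather than forcing a positive label.

def centroid(points):
    # Mean of each coordinate across the training points for one class.
    return tuple(sum(coord) / len(points) for coord in zip(*points))

train = {
    "car":     [(0.9, 0.1), (0.8, 0.2), (1.0, 0.0)],
    "not_car": [(0.1, 0.9), (0.2, 0.8), (0.0, 1.0)],  # incl. adversarial patterns
}
centroids = {label: centroid(pts) for label, pts in train.items()}

def classify(x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    label = min(centroids, key=lambda lbl: dist2(x, centroids[lbl]))
    return None if label == "not_car" else label  # exclude suspected hacks

ok = classify((0.85, 0.15))    # resembles the "car" examples
hack = classify((0.05, 0.95))  # resembles the negative class: excluded
```

The structural point from the conversation is the explicit negative class: instead of the model being forced to call everything some object, inputs that match the "definitely not" training data get rejected outright.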
00:28:28 So, taking a step beyond just Tesla and autopilot, current
00:28:36 deep learning approaches still seem in some ways to be far from general
00:28:42 intelligence systems.
00:28:43 Do you think the current approaches will take us to general intelligence or do
00:28:49 totally new ideas need to be invented?
00:28:54 I think we’re missing a few key ideas for artificial
00:28:59 general intelligence, but it’s going to be upon us very quickly.
00:29:07 And then we’ll need to figure out what shall we do if we even have that choice?
00:29:14 But it’s amazing how people can’t differentiate between, say, the narrow
00:29:18 AI that allows a car to figure out what a lane line is and
00:29:24 navigate streets, versus general intelligence.
00:29:29 Like these are just very different things.
00:29:32 Like, your toaster and your computer are both machines, but one’s much
00:29:35 more sophisticated than the other.
00:29:37 You’re confident that with Tesla
00:29:39 you can create the world’s best toaster?
00:29:42 The world’s best toaster.
00:29:43 Yes.
00:29:43 The world’s best self-driving… Yes.
00:29:52 To me right now, this seems game set match.
00:29:54 I don’t mean to sound complacent or overconfident,
00:29:57 but that’s what it appears.
00:29:58 That is just literally how it appears right now.
00:30:02 I could be wrong, but it appears to be the case that Tesla is vastly ahead of
00:30:08 everyone.
00:30:09 Do you think we will ever create an AI system that we can love and loves us back
00:30:14 in a deep, meaningful way?
00:30:15 Like in the movie Her? I think AI will be capable of convincing you to fall in
00:30:22 love with it very well.
00:30:24 And that’s different than us humans.
00:30:27 You know, we start getting into a metaphysical question of like, do emotions
00:30:31 and thoughts exist in a different realm than the physical?
00:30:34 And maybe they do.
00:30:35 Maybe they don’t.
00:30:35 I don’t know.
00:30:36 But from a physics standpoint, I tend to think of things, you know, like physics
00:30:42 was my main sort of training and from a physics standpoint, essentially, if it
00:30:50 loves you in a way that is, that you can’t tell whether it’s real or not, it is
00:30:53 real.
00:30:55 That’s a physics view of love.
00:30:57 Yeah.
00:30:59 If you cannot prove that it does not, if there’s
00:31:04 no test that you can apply that would allow you to tell the
00:31:14 difference, then there is no difference.
00:31:17 Right.
00:31:17 And it’s similar to seeing our world as a simulation.
00:31:21 There may not be a test to tell the difference between the real world
00:31:24 and the simulation, and therefore, from a physics perspective, it might as well be
00:31:28 the same thing.
00:31:29 Yes.
00:31:30 And there may be ways to test whether it’s a simulation.
00:31:33 There might be. I’m not saying there aren’t. But you could certainly imagine
00:31:36 that a simulation could correct for that: once an entity in the simulation found a way
00:31:40 to detect the simulation, it could either restart, you know, pause the simulation,
00:31:46 start a new simulation, or do one of many other things that then correct for that
00:31:50 error.
00:31:52 So when maybe you or somebody else creates an AGI system and you get to ask
00:32:00 her one question, what would that question be?
00:32:16 What’s outside the simulation?
00:32:20 Elon, thank you so much for talking today.
00:32:22 It was a pleasure.
00:32:23 All right.
00:32:24 Thank you.