Transcript
00:00:00 The following is a conversation with Vijay Kumar.
00:00:03 He’s one of the top roboticists in the world,
00:00:05 a professor at the University of Pennsylvania,
00:00:08 the dean of Penn Engineering, former director of the GRASP Lab,
00:00:12 or the General Robotics, Automation, Sensing
00:00:15 and Perception Laboratory at Penn,
00:00:17 that was established back in 1979, that’s 40 years ago.
00:00:22 Vijay is perhaps best known for his work
00:00:25 in multi robot systems, robot swarms,
00:00:28 and micro aerial vehicles,
00:00:30 robots that elegantly cooperate in flight
00:00:34 under all the uncertainty and challenges
00:00:36 that the real world conditions present.
00:00:38 This is the Artificial Intelligence Podcast.
00:00:41 If you enjoy it, subscribe on YouTube,
00:00:44 give it five stars on iTunes, support on Patreon,
00:00:47 or simply connect with me on Twitter
00:00:49 at Lex Fridman, spelled F R I D M A N.
00:00:53 And now, here’s my conversation with Vijay Kumar.
00:00:58 What is the first robot you’ve ever built
00:01:01 or were a part of building?
00:01:02 Way back when I was in graduate school,
00:01:04 I was part of a fairly big project
00:01:06 that involved building a very large hexapod.
00:01:12 It weighed close to 7,000 pounds,
00:01:17 and it was powered by hydraulic actuation,
00:01:21 or it was actuated by hydraulics with 18 motors,
00:01:27 hydraulic motors, each controlled by an Intel 8085 processor
00:01:34 and an 8086 coprocessor.
00:01:38 And so imagine this huge monster that had 18 joints,
00:01:44 each controlled by an independent computer,
00:01:46 and there was a 19th computer that actually did
00:01:49 the coordination between these 18 joints.
00:01:52 So I was part of this project,
00:01:53 and my thesis work was how do you coordinate the legs?
00:02:02 And in particular, the pressures in the hydraulic cylinders
00:02:06 to get efficient locomotion.
00:02:09 It sounds like a giant mess.
00:02:11 So how difficult is it to make all the motors communicate?
00:02:14 Presumably, you have to send signals hundreds of times
00:02:17 a second, or at least.
00:02:18 So this was not my work,
00:02:19 but the folks who worked on this wrote what I believe
00:02:23 to be the first multiprocessor operating system.
00:02:26 This was in the 80s, and you had to make sure
00:02:30 that obviously messages got across
00:02:32 from one joint to another.
00:02:34 You have to remember the clock speeds on those computers
00:02:37 were about half a megahertz.
00:02:39 Right, the 80s.
00:02:42 So not to romanticize the notion,
00:02:45 but how did it make you feel to see that robot move?
00:02:51 It was amazing.
00:02:52 In hindsight, it looks like, well, we built this thing
00:02:55 which really should have been much smaller.
00:02:57 And of course, today’s robots are much smaller.
00:02:59 You look at Boston Dynamics or Ghost Robotics,
00:03:03 a spinoff from Penn.
00:03:06 But back then, you were stuck with the substrate you had,
00:03:10 the compute you had, so things were unnecessarily big.
00:03:13 But at the same time, and this is just human psychology,
00:03:18 somehow bigger means grander.
00:03:21 People never had the same appreciation
00:03:23 for nanotechnology or nanodevices
00:03:26 as they do for the Space Shuttle or the Boeing 747.
00:03:30 Yeah, you’ve actually done quite a good job
00:03:32 at illustrating that small is beautiful
00:03:36 in terms of robotics.
00:03:37 So what is on that topic is the most beautiful
00:03:42 or elegant robot in motion that you’ve ever seen?
00:03:46 Not to pick favorites or whatever,
00:03:47 but something that just inspires you that you remember.
00:03:51 Well, I think the thing that I’m most proud of
00:03:54 that my students have done is really think about
00:03:57 small UAVs that can maneuver in constrained spaces
00:04:00 and in particular, their ability to coordinate
00:04:03 with each other and form three dimensional patterns.
00:04:06 So once you can do that,
00:04:08 you can essentially create 3D objects in the sky
00:04:14 and you can deform these objects on the fly.
00:04:17 So in some sense, your toolbox of what you can create
00:04:21 has suddenly got enhanced.
00:04:25 And before that, we did the two dimensional version of this.
00:04:27 So we had ground robots forming patterns and so on.
00:04:31 So that was not as impressive, that was not as beautiful.
00:04:34 But if you do it in 3D,
00:04:36 suspended in midair, and you’ve got to go back to 2011
00:04:40 when we did this, now it’s actually pretty standard
00:04:43 to do these things eight years later.
00:04:45 But back then it was a big accomplishment.
00:04:47 So the distributed cooperation
00:04:50 is where beauty emerges in your eyes?
00:04:53 Well, I think beauty to an engineer is very different
00:04:55 from beauty to someone who’s looking at robots
00:04:59 from the outside, if you will.
00:05:01 But what I meant there, so before we said
00:05:04 that grand is associated with size.
00:05:10 And another way of thinking about this
00:05:13 is just the physical shape
00:05:15 and the idea that you can get physical shapes in midair
00:05:18 and have them deform, that’s beautiful.
00:05:21 But the individual components,
00:05:23 the agility is beautiful too, right?
00:05:24 That is true too.
00:05:25 So then how quickly can you actually manipulate
00:05:28 these three dimensional shapes
00:05:29 and the individual components?
00:05:31 Yes, you’re right.
00:05:32 But by the way, you said UAV, unmanned aerial vehicle.
00:05:36 What’s a good term for drones, UAVs, quad copters?
00:05:41 Is there a term that’s being standardized?
00:05:44 I don’t know if there is.
00:05:45 Everybody wants to use the word drones.
00:05:47 And I’ve often said this, drones to me is a pejorative word.
00:05:51 It signifies something that’s dumb,
00:05:53 that’s pre programmed, that does one little thing
00:05:56 and robots are anything but drones.
00:05:58 So I actually don’t like that word,
00:06:00 but that’s what everybody uses.
00:06:02 You could call it unpiloted.
00:06:04 Unpiloted.
00:06:05 But even unpiloted could be radio controlled,
00:06:08 could be remotely controlled in many different ways.
00:06:11 And I think the right word is,
00:06:12 thinking about it as an aerial robot.
00:06:15 You also say agile, autonomous, aerial robot, right?
00:06:19 Yeah, so agility is an attribute, but they don’t have to be agile.
00:06:23 So what biological system,
00:06:24 because you’ve also drawn a lot of inspiration with those.
00:06:27 I’ve seen bees and ants that you’ve talked about.
00:06:30 What living creatures have you found to be most inspiring
00:06:35 as an engineer, instructive in your work in robotics?
00:06:38 To me, so ants are really quite incredible creatures, right?
00:06:43 So you, I mean, the individuals arguably are very simple
00:06:47 in how they’re built and yet they’re incredibly resilient
00:06:52 as a population.
00:06:53 And as individuals, they’re incredibly robust.
00:06:56 So, if you take an ant, it has six legs,
00:07:00 you remove one leg, it still works just fine.
00:07:04 And it moves along.
00:07:05 And I don’t know that it even realizes it’s lost a leg.
00:07:09 So that’s the robustness at the individual ant level.
00:07:13 But then you look at this instinct
00:07:15 for self preservation of the colonies
00:07:17 and they adapt in so many amazing ways.
00:07:20 You know, transcending gaps by just chaining themselves
00:07:26 together when you have a flood,
00:07:29 being able to recruit other teammates
00:07:32 to carry big morsels of food,
00:07:35 and then going out in different directions looking for food,
00:07:38 and then being able to demonstrate consensus,
00:07:43 even though they don’t communicate directly with each other
00:07:47 the way we communicate with each other.
00:07:49 In some sense, they also know how to do democracy,
00:07:51 probably better than what we do.
00:07:53 Yeah, somehow it’s even democracy is emergent.
00:07:57 It seems like all of the phenomena that we see
00:07:59 is all emergent.
00:08:00 It seems like there’s no centralized communicator.
00:08:03 There is, so I think a lot is made about that word,
00:08:06 emergent, and it means lots of things to different people.
00:08:09 But you’re absolutely right.
00:08:10 I think as an engineer, you think about
00:08:13 what elemental behaviors
00:08:17 or primitives you could synthesize
00:08:21 so that the whole looks incredibly powerful,
00:08:25 incredibly synergistic,
00:08:26 the whole definitely being greater than the sum of the parts,
00:08:29 and ants are living proof of that.
00:08:32 So when you see these beautiful swarms
00:08:34 whether they’re biological systems or robots,
00:08:38 do you sometimes think of them
00:08:40 as a single individual living intelligent organism?
00:08:44 So it’s the same as thinking of human beings
00:08:47 or human civilization as one organism,
00:08:51 or do you still, as an engineer,
00:08:52 think about the individual components
00:08:54 and all the engineering
00:08:55 that went into the individual components?
00:08:57 Well, that’s very interesting.
00:08:58 So again, philosophically as engineers,
00:09:01 what we wanna do is to go beyond
00:09:05 the individual components, the individual units,
00:09:08 and think about it as a unit, as a cohesive unit,
00:09:11 without worrying about the individual components.
00:09:15 If you start obsessing about
00:09:17 the individual building blocks and what they do,
00:09:23 you inevitably will find it hard to scale up.
00:09:27 Just mathematically,
00:09:29 just think about individual things you wanna model,
00:09:31 and if you want to have 10 of those,
00:09:34 then you essentially are taking Cartesian products
00:09:36 of 10 things, and that makes it really complicated.
00:09:39 Then to do any kind of synthesis or design
00:09:41 in that high dimension space is really hard.
00:09:44 So the right way to do this
00:09:45 is to think about the individuals in a clever way
00:09:49 so that at the higher level,
00:09:51 when you look at lots and lots of them,
00:09:53 abstractly, you can think of them
00:09:55 in some low dimensional space.
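A minimal sketch of that abstraction idea, assuming all we track at the group level is a centroid and a spread; the function name and the use of a covariance ellipsoid are illustrative choices, not a specific lab implementation:

```python
import numpy as np

def swarm_abstraction(positions: np.ndarray):
    """Map N robot positions (N x 3) to a low-dimensional group state:
    a centroid plus a covariance ellipsoid describing the swarm's shape."""
    centroid = positions.mean(axis=0)              # 3 numbers instead of 3N
    covariance = np.cov(positions, rowvar=False)   # 3x3 shape descriptor
    return centroid, covariance

# Ten robots scattered in 3D; the abstract state stays the same size
# whether there are 10 robots or 10,000.
robots = np.random.rand(10, 3)
c, S = swarm_abstraction(robots)
print("centroid:", c)
print("shape (covariance):\n", S)
```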
00:09:57 So what does that involve?
00:09:58 For the individual, do you have to try to make
00:10:02 the way they see the world as local as possible?
00:10:05 And the other thing,
00:10:06 do you just have to make them robust to collisions?
00:10:09 Like you said with the ants,
00:10:10 if something fails, the whole swarm doesn’t fail.
00:10:15 Right, I think as engineers, we do this.
00:10:17 I mean, you think about, we build planes,
00:10:19 or we build iPhones,
00:10:22 and we know that by taking individual components,
00:10:26 well engineered components with well specified interfaces
00:10:30 that behave in a predictable way,
00:10:31 you can build complex systems.
00:10:34 So that’s ingrained, I would claim,
00:10:36 in most engineers’ thinking,
00:10:39 and it’s true for computer scientists as well.
00:10:41 I think what’s different here is that you want
00:10:44 the individuals to be robust in some sense,
00:10:49 as we do in these other settings,
00:10:52 but you also want some degree of resiliency
00:10:54 for the population.
00:10:56 And so you really want them to be able to reestablish
00:11:02 communication with their neighbors.
00:11:03 You want them to rethink their strategy for group behavior.
00:11:08 You want them to reorganize.
00:11:12 And that’s where I think a lot of the challenges lie.
00:11:15 So just at a high level,
00:11:18 what does it take for a bunch of,
00:11:22 what should we call them, flying robots,
00:11:24 to create a formation?
00:11:26 Just for people who are not familiar
00:11:28 with robotics in general, how much information is needed?
00:11:32 How do you even make it happen
00:11:35 without a centralized controller?
00:11:39 So, I mean, there are a couple of different ways
00:11:41 of looking at this.
00:11:43 If you are a purist,
00:11:45 you think of it as a way of recreating what nature does.
00:11:53 So nature forms groups for several reasons,
00:11:58 but mostly it’s because of this instinct
00:12:02 that organisms have of preserving their colonies,
00:12:05 their population, which means what?
00:12:09 You need shelter, you need food, you need to procreate,
00:12:12 and that’s basically it.
00:12:14 So the kinds of interactions you see are all organic.
00:12:18 They’re all local.
00:12:20 And the only information that they share,
00:12:24 and mostly it’s indirectly, is to, again,
00:12:27 preserve the herd or the flock,
00:12:30 or the swarm, and either by looking for new sources of food
00:12:37 or looking for new shelters, right?
00:12:39 Right.
00:12:41 As engineers, when we build swarms, we have a mission.
00:12:46 And when you think of a mission, and it involves mobility,
00:12:52 most often it’s described in some kind
00:12:55 of a global coordinate system.
00:12:56 As a human, as an operator, as a commander,
00:12:59 or as a collaborator, I have my coordinate system,
00:13:03 and I want the robots to be consistent with that.
00:13:07 So I might think of it slightly differently.
00:13:11 I might want the robots to recognize that coordinate system,
00:13:15 which means not only do they have to think locally
00:13:17 in terms of who their immediate neighbors are,
00:13:19 but they have to be cognizant
00:13:20 of what the global environment is.
00:13:24 They have to be cognizant of what the global environment
00:13:27 looks like.
00:13:28 So if I say, surround this building
00:13:31 and protect this from intruders,
00:13:33 well, they’re immediately in a building centered
00:13:35 coordinate system, and I have to tell them
00:13:37 where the building is.
00:13:38 And they’re globally collaborating
00:13:40 on the map of that building.
00:13:41 They’re maintaining some kind of global,
00:13:44 not just in the frame of the building,
00:13:45 but there’s information that’s ultimately being built up
00:13:49 explicitly as opposed to kind of implicitly,
00:13:53 like nature might.
00:13:54 Correct, correct.
00:13:55 So in some sense, nature is very, very sophisticated,
00:13:57 but the tasks that nature solves or needs to solve
00:14:01 are very different from the kind of engineered tasks,
00:14:05 artificial tasks that we are forced to address.
00:14:09 And again, there’s nothing preventing us
00:14:12 from solving these other problems,
00:14:15 but ultimately it’s about impact.
00:14:16 You want these swarms to do something useful.
00:14:19 And so you’re kind of driven into this very unnatural,
00:14:24 if you will.
00:14:25 Unnatural, meaning not like how nature does it, setting.
00:14:29 And it’s probably a little bit more expensive
00:14:31 to do it the way nature does,
00:14:33 because nature is less sensitive
00:14:37 to the loss of the individual.
00:14:39 And cost wise in robotics,
00:14:42 I think you’re more sensitive to losing individuals.
00:14:45 I think that’s true, although if you look at the price
00:14:49 to performance ratio of robotic components,
00:14:51 it’s coming down dramatically, right?
00:14:54 It continues to come down.
00:14:56 So I think we’re asymptotically approaching the point
00:14:58 where, yeah,
00:14:59 the cost of individuals would really become insignificant.
00:15:05 So let’s step back at a high level view,
00:15:07 the impossible question of what kind of, as an overview,
00:15:12 what kind of autonomous flying vehicles
00:15:14 are there in general?
00:15:16 I think the ones that receive a lot of notoriety
00:15:19 are obviously the military vehicles.
00:15:22 Military vehicles are controlled by a base station,
00:15:26 but have a lot of human supervision.
00:15:29 But they have limited autonomy,
00:15:31 which is the ability to go from point A to point B.
00:15:34 And even the more sophisticated now,
00:15:37 sophisticated vehicles can do autonomous takeoff
00:15:40 and landing.
00:15:41 And those usually have wings and they’re heavy.
00:15:44 Usually they have wings,
00:15:45 but then there’s nothing preventing us from doing this
00:15:47 for helicopters as well.
00:15:49 There are many military organizations
00:15:52 that have autonomous helicopters in the same vein.
00:15:56 And by the way, you look at autopilots and airplanes
00:16:00 and it’s actually very similar.
00:16:02 In fact, one interesting question we can ask is,
00:16:07 if you look at all the air safety violations,
00:16:12 all the crashes that occurred,
00:16:14 would they have happened if the plane were truly autonomous?
00:16:18 And I think you’ll find that in many of the cases,
00:16:21 because of pilot error, we made silly decisions.
00:16:24 And so in some sense, even in air traffic,
00:16:26 commercial air traffic, there’s a lot of applications,
00:16:29 although we only see autonomy being enabled
00:16:33 at very high altitudes when the plane is on autopilot.
00:16:38 The plane is on autopilot.
00:16:41 There’s still a role for the human
00:16:42 and that kind of autonomy is, you’re kind of implying,
00:16:47 I don’t know what the right word is,
00:16:48 but it’s a little dumber than it could be.
00:16:53 Right, so in the lab, of course,
00:16:55 we can afford to be a lot more aggressive.
00:16:59 And the question we try to ask is,
00:17:04 can we make robots that will be able to make decisions
00:17:10 without any kind of external infrastructure?
00:17:13 So what does that mean?
00:17:14 So the most common piece of infrastructure
00:17:16 that airplanes use today is GPS.
00:17:20 GPS is also the most brittle form of information.
00:17:26 If you have driven in a city and tried to use GPS navigation
00:17:30 among tall buildings, you immediately lose GPS.
00:17:32 And so that’s not a very sophisticated way
00:17:36 of building autonomy.
00:17:37 I think the second piece of infrastructure
00:17:39 they rely on is communications.
00:17:41 Again, it’s very easy to jam communications.
00:17:47 In fact, if you use WiFi, you know that WiFi signals
00:17:51 drop out, cell signals drop out.
00:17:53 So to rely on something like that is not good.
00:17:58 The third form of infrastructure we use,
00:18:01 and I hate to call it infrastructure,
00:18:02 but it is that in some sense, is people.
00:18:06 So you could rely on somebody to pilot you.
00:18:09 And so the question you wanna ask is,
00:18:11 if there are no pilots, there’s no communications
00:18:14 with any base station, if there’s no knowledge of position,
00:18:18 and if there’s no a priori map,
00:18:21 a priori knowledge of what the environment looks like,
00:18:24 a priori model of what might happen in the future,
00:18:28 can robots navigate?
00:18:29 So that is true autonomy.
00:18:31 So that’s true autonomy, and we’re talking about,
00:18:34 you mentioned like military application of drones.
00:18:36 Okay, so what else is there?
00:18:38 You talk about agile, autonomous flying robots,
00:18:42 aerial robots, so that’s a different kind of,
00:18:45 it’s not winged, it’s not big, at least it’s small.
00:18:48 So I use the word agility mostly,
00:18:50 or at least we’re motivated to do agile robots,
00:18:53 mostly because robots can operate
00:18:58 and should be operating in constrained environments.
00:19:02 And if you want to operate the way a Global Hawk operates,
00:19:06 I mean, the kinds of conditions in which you operate
00:19:09 are very, very restrictive.
00:19:11 If you wanna go inside a building,
00:19:13 for example, for search and rescue,
00:19:15 or to locate an active shooter,
00:19:18 or you wanna navigate under the canopy in an orchard
00:19:22 to look at health of plants,
00:19:23 or to look for, to count fruits,
00:19:28 to measure the tree trunks.
00:19:31 These are things we do, by the way.
00:19:33 There’s some cool agriculture stuff you’ve shown
00:19:35 in the past, it’s really awesome.
00:19:37 So in those kinds of settings, you do need that agility.
00:19:40 Agility does not necessarily mean
00:19:42 you break records for the 100 meters dash.
00:19:45 What it really means is you see the unexpected
00:19:48 and you’re able to maneuver in a safe way,
00:19:51 and in a way that gets you the most information
00:19:55 about the thing you’re trying to do.
00:19:57 By the way, you may be the only person
00:20:00 who, in a TED Talk, has used a math equation,
00:20:04 which is amazing, people should go see one of your TED Talks.
00:20:07 Actually, it’s very interesting,
00:20:08 because the TED curator, Chris Anderson,
00:20:12 told me, you can’t show math.
00:20:15 And I thought about it, but that’s who I am.
00:20:18 I mean, that’s our work.
00:20:20 And so I felt compelled to give the audience a taste
00:20:25 for at least some math.
00:20:27 So on that point, simply, what does it take
00:20:32 to make a thing with four motors fly, a quadcopter,
00:20:37 one of these little flying robots?
00:20:41 How hard is it to make it fly?
00:20:43 How do you coordinate the four motors?
00:20:46 How do you convert those motors into actual movement?
00:20:52 So this is an interesting question.
00:20:54 We’ve been trying to do this since 2000.
00:20:58 It is a commentary on the sensors
00:21:00 that were available back then,
00:21:02 the computers that were available back then.
00:21:05 And a number of things happened between 2000 and 2007.
00:21:11 One is the advances in computing,
00:21:14 which is, so we all know about Moore’s Law,
00:21:16 but I think 2007 was a tipping point,
00:21:19 the year of the iPhone, the year of the cloud.
00:21:22 Lots of things happened in 2007.
00:21:25 But going back even further,
00:21:27 inertial measurement units as a sensor really matured.
00:21:31 Again, lots of reasons for that.
00:21:33 Certainly, there’s a lot of federal funding,
00:21:35 particularly DARPA in the US,
00:21:38 but they didn’t anticipate this boom in IMUs.
00:21:42 But if you look, subsequently what happened
00:21:46 is that every car manufacturer had to put an airbag in,
00:21:50 which meant you had to have an accelerometer on board.
00:21:52 And so that drove down the price to performance ratio.
00:21:55 Wow, I should know this.
00:21:56 That’s very interesting.
00:21:57 That’s very interesting, the connection there.
00:21:59 And that’s why research is very,
00:22:01 it’s very hard to predict the outcomes.
00:22:04 And again, the federal government spent a ton of money
00:22:07 on things that they thought were useful for resonators,
00:22:12 but it ended up enabling these small UAVs, which is great,
00:22:16 because I could have never raised that much money
00:22:18 and sold this project,
00:22:20 hey, we want to build these small UAVs.
00:22:22 Can you actually fund the development of low cost IMUs?
00:22:25 So why do you need an IMU on a UAV?
00:22:27 So I’ll come back to that.
00:22:31 So in 2007, 2008, we were able to build these.
00:22:33 And then the question you’re asking was a good one.
00:22:35 How do you coordinate the motors to develop this?
00:22:40 But over the last 10 years, everything is commoditized.
00:22:43 A high school kid today can pick up
00:22:46 a Raspberry Pi kit and build this.
00:22:50 All the low level functionality is automated.
00:22:54 But basically at some level,
00:22:56 you have to drive the motors at the right RPMs,
00:23:01 the right velocity,
00:23:04 in order to generate the right amount of thrust,
00:23:07 in order to position it and orient it in a way
00:23:10 that you need to in order to fly.
00:23:13 The feedback that you get is from onboard sensors,
00:23:16 and the IMU is an important part of it.
00:23:18 The IMU tells you what the acceleration is,
00:23:23 as well as what the angular velocity is.
00:23:26 And those are important pieces of information.
00:23:30 In addition to that, you need some kind of local position
00:23:34 or velocity information.
00:23:37 For example, when we walk,
00:23:39 we implicitly have this information
00:23:41 because we kind of know what our stride length is.
00:23:46 We also are looking at images fly past our retina,
00:23:51 if you will, and so we can estimate velocity.
00:23:54 We also have accelerometers in our head,
00:23:56 and we’re able to integrate all these pieces of information
00:23:59 to determine where we are as we walk.
00:24:02 And so robots have to do something very similar.
00:24:04 You need an IMU, you need some kind of a camera
00:24:08 or other sensor that’s measuring velocity,
00:24:12 and then you need some kind of a global reference frame
00:24:15 if you really want to think about doing something
00:24:19 in a world coordinate system.
00:24:21 And so how do you estimate your position
00:24:23 with respect to that global reference frame?
00:24:25 That’s important as well.
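A toy one-dimensional sketch of that fusion, assuming an IMU supplies acceleration and a camera supplies occasional velocity; the blending gain is invented for illustration, and a real system would use a proper estimator such as a Kalman filter:

```python
def fuse_step(pos, vel, accel_imu, vel_meas, dt, gain=0.1):
    """One toy fusion step: integrate IMU acceleration, then nudge the
    velocity estimate toward a camera-derived velocity measurement."""
    vel_pred = vel + accel_imu * dt                     # dead-reckon from the IMU
    vel_est = vel_pred + gain * (vel_meas - vel_pred)   # correct with vision
    pos_est = pos + vel_est * dt                        # update position estimate
    return pos_est, vel_est

# Example: a hovering vehicle drifting slightly; vision says velocity ~ 0.
p, v = 0.0, 0.0
for _ in range(100):
    p, v = fuse_step(p, v, accel_imu=0.02, vel_meas=0.0, dt=0.01)
print(p, v)
```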
00:24:26 So coordinating the RPMs of the four motors
00:24:29 is what allows you to, first of all, fly and hover,
00:24:32 and then you can change the orientation
00:24:35 and the velocity and so on.
00:24:37 Exactly, exactly.
00:24:38 So it’s a bunch of degrees of freedom
00:24:40 that you’re controlling.
00:24:41 There’s six degrees of freedom,
00:24:42 but you only have four inputs, the four motors.
00:24:44 And it turns out to be a remarkably versatile configuration.
00:24:50 You think at first, well, I only have four motors,
00:24:53 how do I go sideways?
00:24:55 But it’s not too hard to say, well, if I tilt myself,
00:24:57 I can go sideways, and then you have four motors
00:25:00 pointing up, how do I rotate in place
00:25:03 about a vertical axis?
00:25:05 Well, you rotate them at different speeds
00:25:07 and that generates reaction moments
00:25:09 and that allows you to turn.
00:25:11 So it’s actually a pretty, it’s an optimal configuration
00:25:14 from an engineer standpoint.
00:25:18 It’s very simple, very cleverly done, and very versatile.
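A minimal sketch of that motor-to-motion mapping for a standard plus-configuration quadrotor, assuming each rotor's thrust scales with the square of its speed; the coefficients and sign conventions are placeholders, not values for any particular vehicle:

```python
import numpy as np

k_f, k_m, L = 1e-5, 1e-7, 0.2   # thrust coeff, drag-moment coeff, arm length (illustrative)

def mix(motor_speeds):
    """Map four rotor speeds (rad/s) to total thrust and body moments
    for a plus-configuration quadrotor ordered front, right, back, left."""
    f = k_f * np.square(motor_speeds)              # per-rotor thrusts
    thrust = f.sum()                               # collective lift
    roll   = L * (f[3] - f[1])                     # left vs right rotors
    pitch  = L * (f[2] - f[0])                     # back vs front rotors
    yaw    = k_m * (motor_speeds[0]**2 - motor_speeds[1]**2
                    + motor_speeds[2]**2 - motor_speeds[3]**2)  # reaction torques
    return thrust, roll, pitch, yaw

# Spinning the two counter-rotating pairs at different speeds yaws the
# vehicle in place without tilting it.
print(mix(np.array([410.0, 400.0, 410.0, 400.0])))
```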
00:25:23 So if you could step back to a time,
00:25:27 so I’ve always known flying robots as,
00:25:31 to me, it was natural that a quadcopter should fly.
00:25:35 But when you first started working with it,
00:25:38 how surprised are you that you can make,
00:25:42 do so much with the four motors?
00:25:45 How surprising is it that you can make this thing fly,
00:25:47 first of all, that you can make it hover,
00:25:49 that you can add control to it?
00:25:52 Firstly, this is not, the four motor configuration
00:25:55 is not ours.
00:25:56 It has at least a hundred year history.
00:26:00 And various people tried to get quadrotors
00:26:04 to fly without much success.
00:26:08 As I said, we’ve been working on this since 2000.
00:26:10 Our first designs were, well, this is way too complicated.
00:26:14 Why don’t we try to get an omnidirectional flying robot?
00:26:18 So our early designs, we had eight rotors.
00:26:21 And so these eight rotors were arranged uniformly
00:26:26 on a sphere, if you will.
00:26:28 So you can imagine a symmetric configuration.
00:26:30 And so you should be able to fly anywhere.
00:26:33 But the real challenge we had is the strength to weight ratio
00:26:36 is not enough.
00:26:37 And of course, we didn’t have the sensors and so on.
00:26:40 So everybody knew, or at least the people
00:26:43 who worked with rotorcrafts knew,
00:26:44 four rotors will get it done.
00:26:47 So that was not our idea.
00:26:49 But it took a while before we could actually do
00:26:52 the onboard sensing and the computation that was needed
00:26:56 for the kinds of agile maneuvering that we wanted to do
00:27:01 in our little aerial robots.
00:27:03 And that only happened between 2007 and 2009 in our lab.
00:27:07 Yeah, and you have to send the signal
00:27:09 maybe a hundred times a second.
00:27:12 So the compute there, everything has to come down in price.
00:27:15 And what are the steps of getting from point A to point B?
00:27:21 So we just talked about like local control.
00:27:25 But if all the kind of cool dancing in the air
00:27:30 that I’ve seen you show, how do you make it happen?
00:27:34 How do you make a trajectory?
00:27:37 First of all, okay, figure out a trajectory.
00:27:40 So plan a trajectory.
00:27:41 And then how do you make that trajectory happen?
00:27:44 Yeah, I think planning is a very fundamental problem
00:27:47 in robotics.
00:27:48 I think 10 years ago it was an esoteric thing,
00:27:50 but today with self driving cars,
00:27:53 everybody can understand this basic idea
00:27:55 that a car sees a whole bunch of things
00:27:57 and it has to keep a lane or maybe make a right turn
00:28:00 or switch lanes.
00:28:01 It has to plan a trajectory.
00:28:02 It has to be safe.
00:28:03 It has to be efficient.
00:28:04 So everybody’s familiar with that.
00:28:06 That’s kind of the first step that you have to think about
00:28:10 when you say autonomy.
00:28:14 And so for us, it’s about finding smooth motions,
00:28:19 motions that are safe.
00:28:21 So we think about these two things.
00:28:22 One is optimality, one is safety.
00:28:24 Clearly you cannot compromise safety.
00:28:28 So you’re looking for safe, optimal motions.
00:28:31 The other thing you have to think about is
00:28:34 can you actually compute a reasonable trajectory
00:28:38 in a small amount of time?
00:28:40 Cause you have a time budget.
00:28:42 So the optimal becomes suboptimal,
00:28:45 but in our lab we focus on synthesizing smooth trajectories
00:28:51 that satisfy all the constraints.
00:28:53 In other words, they don’t violate any safety constraints
00:28:58 and are as efficient as possible.
00:29:02 And when I say efficient,
00:29:04 it could mean I want to get from point A to point B
00:29:06 as quickly as possible,
00:29:08 or I want to get to it as gracefully as possible,
00:29:12 or I want to consume as little energy as possible.
00:29:15 But always staying within the safety constraints.
00:29:18 But yes, always finding a safe trajectory.
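As one textbook illustration of "smooth and safe," here is the classic minimum-jerk polynomial for a single axis, going from rest at A to rest at B in time T; it is a standard form, not the specific optimization from any particular paper, and a real planner layers obstacle and actuator constraints on top:

```python
def min_jerk(a, b, T, t):
    """Position at time t on the minimum-jerk polynomial from a to b over [0, T],
    starting and ending at rest (zero velocity and acceleration)."""
    s = t / T                                        # normalized time in [0, 1]
    return a + (b - a) * (10*s**3 - 15*s**4 + 6*s**5)

# Sample the trajectory: smooth start, smooth stop, no overshoot.
A, B, T = 0.0, 2.0, 3.0
samples = [round(min_jerk(A, B, T, 0.5 * k), 3) for k in range(7)]
print(samples)
```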
00:29:22 So there’s a lot of excitement and progress
00:29:25 in the field of machine learning
00:29:27 and reinforcement learning
00:29:29 and the neural network variant of that
00:29:32 with deep reinforcement learning.
00:29:33 Do you see a role of machine learning
00:29:36 in, so a lot of the success of flying robots
00:29:40 did not rely on machine learning,
00:29:42 except for maybe a little bit of the perception
00:29:45 on the computer vision side.
00:29:46 On the control side and the planning,
00:29:48 do you see there’s a role in the future
00:29:50 for machine learning?
00:29:51 So let me disagree a little bit with you.
00:29:53 I think we never perhaps explicitly called out
00:29:56 learning in my work,
00:29:57 but even this very simple idea of being able to fly
00:30:00 through a constrained space.
00:30:02 The first time you try it, you’ll invariably,
00:30:05 you might get it wrong if the task is challenging.
00:30:08 And the reason is to get it perfectly right,
00:30:12 you have to model everything in the environment.
00:30:15 And flying is notoriously hard to model.
00:30:19 There are aerodynamic effects that we constantly discover.
00:30:26 Even just before I was talking to you,
00:30:29 I was talking to a student about how blades flap
00:30:33 when they fly.
00:30:35 And that ends up changing how a rotorcraft
00:30:40 is accelerated in the angular direction.
00:30:43 Does he use like micro flaps or something?
00:30:46 It’s not micro flaps.
00:30:47 So we assume that each blade is rigid,
00:30:49 but actually it flaps a little bit.
00:30:51 It bends.
00:30:52 Interesting, yeah.
00:30:53 And so the models rely on the fact,
00:30:56 on the assumption that they’re rigid.
00:30:58 On the assumption that they’re actually rigid,
00:31:00 but that’s not true.
00:31:02 If you’re flying really quickly,
00:31:03 these effects become significant.
00:31:06 If you’re flying close to the ground,
00:31:09 you get pushed off by the ground, right?
00:31:12 Something which every pilot knows when he tries to land
00:31:14 or she tries to land, this is called a ground effect.
00:31:18 Something very few pilots think about
00:31:21 is what happens when you go close to a ceiling
00:31:23 or you get sucked into a ceiling.
00:31:25 There are very few aircrafts
00:31:26 that fly close to any kind of ceiling.
00:31:29 Likewise, when you go close to a wall,
00:31:33 there are these wall effects.
00:31:35 And if you’ve gone on a train
00:31:37 and you pass another train that’s traveling
00:31:39 in the opposite direction, you feel the buffeting.
00:31:42 And so these kinds of microclimates
00:31:45 affect our UAVs significantly.
00:31:47 So if you want…
00:31:48 And they’re impossible to model, essentially.
00:31:50 I wouldn’t say they’re impossible to model,
00:31:52 but the level of sophistication you would need
00:31:54 in the model and the software would be tremendous.
00:32:00 Plus, to get everything right would be awfully tedious.
00:32:02 So the way we do this is over time,
00:32:05 we figure out how to adapt to these conditions.
00:32:10 So early on, we use the form of learning
00:32:13 that we call iterative learning.
00:32:15 So this idea, if you want to perform a task,
00:32:18 there are a few things that you need to change
00:32:22 and iterate over a few parameters
00:32:24 that over time you can figure out.
00:32:29 So I could call it policy gradient reinforcement learning,
00:32:33 but actually it was just iterative learning.
00:32:34 Iterative learning.
00:32:36 And so this was there way back.
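A toy sketch of that iterative learning idea, assuming we repeat the same maneuver and adjust a feedforward correction using the previous trial's tracking error; the plant, disturbance, and gain are invented purely for illustration:

```python
import numpy as np

def run_trial(feedforward, disturbance=0.3):
    """Toy 'flight': the achieved value is the command minus an unmodeled
    disturbance (standing in for aerodynamic effects we did not model)."""
    return feedforward - disturbance

target = 1.0
u = np.zeros(1)              # feedforward command, refined trial after trial
for trial in range(10):
    achieved = run_trial(u[0])
    error = target - achieved
    u[0] += 0.5 * error       # iterative learning update: learn from the last trial
    print(f"trial {trial}: error = {error:.4f}")
```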
00:32:37 I think what’s interesting is,
00:32:39 if you look at autonomous vehicles today,
00:32:43 learning occurs, could occur in two pieces.
00:32:45 One is perception, understanding the world.
00:32:47 Second is action, taking actions.
00:32:50 Everything that I’ve seen that is successful
00:32:52 is on the perception side of things.
00:32:54 So in computer vision,
00:32:55 we’ve made amazing strides in the last 10 years.
00:32:57 So recognizing objects, actually detecting objects,
00:33:01 classifying them and tagging them in some sense,
00:33:06 annotating them.
00:33:07 This is all done through machine learning.
00:33:09 On the action side, on the other hand,
00:33:12 I don’t know of any examples
00:33:13 where there are fielded systems
00:33:15 where we actually learn
00:33:17 the right behavior
00:33:20 and it is successful, outside of single demonstrations.
00:33:22 In the laboratory, this is the holy grail.
00:33:24 Can you do end to end learning?
00:33:26 Can you go from pixels to motor currents?
00:33:30 This is really, really hard.
00:33:32 And I think if you go forward,
00:33:35 the right way to think about these things
00:33:37 is data driven approaches,
00:33:40 learning based approaches,
00:33:42 in concert with model based approaches,
00:33:45 which is the traditional way of doing things.
00:33:47 So I think there’s a piece,
00:33:48 there’s a role for each of these methodologies.
00:33:51 So what do you think,
00:33:52 just jumping out on topic
00:33:53 since you mentioned autonomous vehicles,
00:33:56 what do you think are the limits on the perception side?
00:33:58 So I’ve talked to Elon Musk
00:34:01 and there on the perception side,
00:34:03 they’re using primarily computer vision
00:34:05 to perceive the environment.
00:34:08 In your work with,
00:34:09 because you work with the real world a lot
00:34:12 and the physical world,
00:34:13 what are the limits of computer vision?
00:34:15 Do you think we can solve autonomous vehicles
00:34:19 on the perception side,
00:34:20 focusing on vision alone and machine learning?
00:34:24 So, we also have a spinoff company,
00:34:27 Exyn Technologies, which works underground in mines.
00:34:31 So you go into mines, they’re dark, they’re dirty.
00:34:36 You fly in a dirty area,
00:34:38 there’s stuff you kick up by the propellers,
00:34:41 the downwash kicks up dust.
00:34:42 I challenge you to get a computer vision algorithm
00:34:45 to work there.
00:34:46 So we use LIDARs in that setting.
00:34:51 Indoors and even outdoors when we fly through fields,
00:34:55 I think there’s a lot of potential
00:34:57 for just solving the problem using computer vision alone.
00:35:01 But I think the bigger question is,
00:35:02 can you actually solve
00:35:06 or can you actually identify all the corner cases
00:35:09 using a single sensing modality and using learning alone?
00:35:13 So what’s your intuition there?
00:35:15 So look, if you have a corner case
00:35:17 and your algorithm doesn’t work,
00:35:20 your instinct is to go get data about the corner case
00:35:23 and patch it up, learn how to deal with that corner case.
00:35:27 But at some point, this is gonna saturate,
00:35:32 this approach is not viable.
00:35:34 So today, computer vision algorithms can detect
00:35:38 90% of the objects or can detect objects 90% of the time,
00:35:41 classify them 90% of the time.
00:35:43 Cats on the internet probably can do 95%, I don’t know.
00:35:47 But to get from 90% to 99%, you need a lot more data.
00:35:52 And then I tell you, well, that’s not enough
00:35:54 because I have a safety critical application,
00:35:56 I wanna go from 99% to 99.9%.
00:36:00 That’s even more data.
00:36:01 So I think if you look at the accuracy you want on the X axis
00:36:09 and look at the amount of data on the Y axis,
00:36:14 I believe that curve is an exponential curve.
00:36:16 Wow, okay, it’s even hard if it’s linear.
00:36:19 It’s hard if it’s linear, totally,
00:36:20 but I think it’s exponential.
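One way to make that intuition concrete, as an illustrative model rather than a measured law: suppose each additional "nine" of accuracy costs a constant factor more data.

```latex
% Illustrative model only, not a measured law: each extra "nine" of accuracy
% costs a constant factor k more training data:
%   N(90\%) = N_0, \quad N(99\%) = k N_0, \quad N(99.9\%) = k^2 N_0, \ldots
% Writing the error as e = 1 - \text{accuracy}:
N(e) \approx N_0 \, k^{\log_{10}(e_0 / e)}
% which grows without bound as the error e is pushed toward zero.
```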
00:36:22 And the other thing you have to think about
00:36:24 is that this process is a very, very power hungry process
00:36:29 to run data farms or servers.
00:36:32 Power, do you mean literally power?
00:36:34 Literally power, literally power.
00:36:36 So in 2014, five years ago, and I don’t have more recent data,
00:36:41 2% of US electricity consumption was from data farms.
00:36:48 So we think about this as an information science
00:36:52 and information processing problem.
00:36:54 Actually, it is an energy processing problem.
00:36:57 And so unless we figured out better ways of doing this,
00:37:00 I don’t think this is viable.
00:37:02 So talking about driving, which is a safety critical application
00:37:06 and some aspect of flight is safety critical,
00:37:10 maybe philosophical question, maybe an engineering one,
00:37:12 what problem do you think is harder to solve,
00:37:15 autonomous driving or autonomous flight?
00:37:18 That’s a really interesting question.
00:37:19 I think autonomous flight has several advantages
00:37:25 that autonomous driving doesn’t have.
00:37:29 So look, if I want to go from point A to point B,
00:37:32 I have a very, very safe trajectory.
00:37:34 Go vertically up to a maximum altitude,
00:37:36 fly horizontally to just about the destination,
00:37:39 and then come down vertically.
00:37:42 This is preprogrammed.
00:37:45 The equivalent of that is very hard to find
00:37:48 in the self driving car world because you’re on the ground,
00:37:51 you’re in a two dimensional surface,
00:37:53 and the trajectories on the two dimensional surface
00:37:56 are more likely to encounter obstacles.
00:38:00 I mean this in an intuitive sense, but mathematically true.
00:38:03 That’s mathematically as well, that’s true.
00:38:06 There’s the other option in the 2D space of platooning,
00:38:10 or because there’s so many obstacles,
00:38:11 you can connect with those obstacles
00:38:14 and all these kinds of options.
00:38:14 Sure, but those exist in the three dimensional space as well.
00:38:16 So they do.
00:38:17 So the question also implies how difficult are obstacles
00:38:21 in the three dimensional space in flight?
00:38:23 So that’s the downside.
00:38:25 I think in three dimensional space,
00:38:26 you’re modeling three dimensional world,
00:38:29 not just because you want to avoid it,
00:38:31 but you want to reason about it,
00:38:33 and you want to work in the three dimensional environment,
00:38:35 and that’s significantly harder.
00:38:37 So that’s one disadvantage.
00:38:38 I think the second disadvantage is of course,
00:38:41 anytime you fly, you have to put up
00:38:43 with the peculiarities of aerodynamics
00:38:46 and their complicated environments.
00:38:48 How do you negotiate that?
00:38:49 So that’s always a problem.
00:38:51 Do you see a time in the future where there is,
00:38:55 you mentioned there’s agriculture applications.
00:38:58 So there’s a lot of applications of flying robots,
00:39:01 but do you see a time in the future
00:39:03 where there’s tens of thousands,
00:39:05 or maybe hundreds of thousands of delivery drones
00:39:08 that fill the sky, delivery flying robots?
00:39:12 I think there’s a lot of potential
00:39:14 for the last mile delivery.
00:39:15 And so in crowded cities, I don’t know,
00:39:19 if you go to a place like Hong Kong,
00:39:21 just crossing the river can take half an hour,
00:39:24 and while a drone can just do it in five minutes at most.
00:39:29 I think you look at delivery of supplies to remote villages.
00:39:35 I work with a nonprofit called WeRobotics.
00:39:38 So they work in the Peruvian Amazon,
00:39:40 where the only highways that are available
00:39:44 are the rivers.
00:39:47 And to get from point A to point B may take five hours,
00:39:52 while with a drone, you can get there in 30 minutes.
00:39:56 So just delivering drugs,
00:39:59 retrieving samples for testing vaccines,
00:40:05 I think there’s huge potential here.
00:40:07 So I think the challenges are not technological,
00:40:09 but the challenge is economical.
00:40:12 The one thing I’ll tell you that nobody thinks about
00:40:15 is the fact that we’ve not made huge strides
00:40:18 in battery technology.
00:40:20 Yes, it’s true, batteries are becoming less expensive
00:40:23 because we have these mega factories that are coming up,
00:40:26 but they’re all based on lithium based technologies.
00:40:28 And if you look at the energy density
00:40:31 and the power density,
00:40:33 those are two fundamentally limiting numbers.
00:40:38 So power density is important
00:40:39 because for a UAV to take off vertically into the air,
00:40:42 which most drones do, they don’t have a runway,
00:40:46 you consume roughly 200 watts per kilo at the small size.
00:40:51 That’s a lot, right?
00:40:53 In contrast, the human brain consumes less than 80 watts,
00:40:57 the whole of the human brain.
00:40:59 So just imagine just lifting yourself into the air
00:41:03 is like two or three light bulbs,
00:41:06 which makes no sense to me.
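The rough arithmetic behind that comparison, using the 200 watts per kilogram figure quoted above and an illustrative one-kilogram vehicle:

```latex
% Rough arithmetic, with an illustrative 1 kg vehicle and the ~200 W/kg figure above:
P_{\text{hover}} \approx 200\ \tfrac{\text{W}}{\text{kg}} \times 1\ \text{kg} = 200\ \text{W}
% i.e. roughly two or three old incandescent bulbs just to stay in the air,
% while the entire human brain runs on under 80 W.
```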
00:41:07 Yeah, so you’re going to have to at scale
00:41:10 solve the energy problem then,
00:41:12 charging the batteries, storing the energy and so on.
00:41:18 And then the storage is the second problem,
00:41:20 but storage limits the range.
00:41:22 But you have to remember that you have to burn
00:41:28 a lot of it per given time.
00:41:31 So the burning is another problem.
00:41:32 Which is a power question.
00:41:34 Yes, and do you think just your intuition,
00:41:38 there are breakthroughs in batteries on the horizon?
00:41:44 How hard is that problem?
00:41:46 Look, there are a lot of companies
00:41:47 that are promising flying cars that are autonomous
00:41:53 and that are clean.
00:41:59 I think they’re over promising.
00:42:01 The autonomy piece is doable.
00:42:04 The clean piece, I don’t think so.
00:42:08 There’s another company that I work with called Jetoptera.
00:42:11 They make small jet engines.
00:42:15 And they can get up to 50 miles an hour very easily
00:42:18 and lift 50 kilos.
00:42:19 But they’re jet engines, they’re efficient,
00:42:23 they’re a little louder than electric vehicles,
00:42:26 but they can build flying cars.
00:42:28 So your sense is that there’s a lot of pieces
00:42:32 that have come together.
00:42:33 So on this crazy question,
00:42:37 if you look at companies like Kitty Hawk,
00:42:39 working on electric, so the clean,
00:42:43 talking to Sebastian Thrun, right?
00:42:45 It’s a crazy dream, you know?
00:42:48 But you work with flight a lot.
00:42:52 You’ve mentioned before that manned flights
00:42:55 or carrying a human body is very difficult to do.
00:43:01 So how crazy is flying cars?
00:43:04 Do you think there’ll be a day
00:43:05 when we have vertical takeoff and landing vehicles
00:43:11 that are sufficiently affordable
00:43:14 that we’re going to see a huge amount of them?
00:43:17 And they would look like something like we dream of
00:43:19 when we think about flying cars.
00:43:21 Yeah, like the Jetsons.
00:43:22 The Jetsons, yeah.
00:43:23 So look, there are a lot of smart people working on this
00:43:25 and you never say something is not possible
00:43:29 when you have people like Sebastian Thrun working on it.
00:43:32 So I totally think it’s viable.
00:43:35 I question, again, the electric piece.
00:43:38 The electric piece, yeah.
00:43:39 And again, for short distances, you can do it.
00:43:41 And there’s no reason to suggest
00:43:43 that these all just have to be rotorcrafts.
00:43:45 You take off vertically,
00:43:46 but then you morph into a forward flight.
00:43:49 I think there are a lot of interesting designs.
00:43:51 The question to me is, are these economically viable?
00:43:56 And if you agree to do this with fossil fuels,
00:43:59 it instantly becomes viable.
00:44:01 That’s a real challenge.
00:44:03 Do you think it’s possible for robots and humans
00:44:06 to collaborate successfully on tasks?
00:44:08 So a lot of robotics folks that I talk to and work with,
00:44:13 I mean, humans just add a giant mess to the picture.
00:44:18 So it’s best to remove them from consideration
00:44:20 when solving specific tasks.
00:44:22 It’s very difficult to model.
00:44:23 There’s just a source of uncertainty.
00:44:26 In your work with these agile flying robots,
00:44:32 do you think there’s a role for collaboration with humans?
00:44:35 Or is it best to model tasks in a way
00:44:38 that doesn’t have a human in the picture?
00:44:43 Well, I don’t think we should ever think about robots
00:44:46 without human in the picture.
00:44:48 Ultimately, robots are there because we want them
00:44:50 to solve problems for humans.
00:44:54 But there’s no general solution to this problem.
00:44:58 I think if you look at human interaction
00:45:00 and how humans interact with robots,
00:45:02 you know, we think of these in sort of three different ways.
00:45:05 One is the human commanding the robot.
00:45:08 The second is the human collaborating with the robot.
00:45:12 So for example, we work on how a robot
00:45:15 can actually pick up things with a human and carry things.
00:45:18 That’s like true collaboration.
00:45:20 And third, we think about humans as bystanders,
00:45:25 self driving cars, what’s the human’s role
00:45:27 and how do self driving cars
00:45:30 acknowledge the presence of humans?
00:45:32 So I think all of these things are different scenarios.
00:45:35 It depends on what kind of humans, what kind of task.
00:45:39 And I think it’s very difficult to say
00:45:41 that there’s a general theory that we all have for this.
00:45:45 But at the same time, it’s also silly to say
00:45:48 that we should think about robots independent of humans.
00:45:52 So to me, human robot interaction
00:45:55 is almost a mandatory aspect of everything we do.
00:45:59 Yes, but to which degree, so your thoughts,
00:46:02 if we jump to autonomous vehicles, for example,
00:46:05 there’s a big debate between what’s called
00:46:08 level two and level four.
00:46:10 So semi autonomous and autonomous vehicles.
00:46:13 And so the Tesla approach currently at least
00:46:16 has a lot of collaboration between human and machine.
00:46:18 So the human is supposed to actively supervise
00:46:22 the operation of the robot.
00:46:23 Part of the safety definition of how safe a robot is
00:46:29 in that case is how effective is the human in monitoring it.
00:46:32 Do you think that’s ultimately not a good approach
00:46:37 in sort of having a human in the picture,
00:46:42 not as a bystander or part of the infrastructure,
00:46:47 but really as part of what’s required
00:46:50 to make the system safe?
00:46:51 This is harder than it sounds.
00:46:53 I think, you know, if you, I mean,
00:46:58 I’m sure you’ve driven before on highways and so on.
00:47:01 It’s really very hard to have to relinquish control
00:47:06 to a machine and then take over when needed.
00:47:10 So I think Tesla’s approach is interesting
00:47:12 because it allows you to periodically establish
00:47:14 some kind of contact with the car.
00:47:18 Toyota, on the other hand, is thinking about
00:47:20 shared autonomy or collaborative autonomy as a paradigm.
00:47:24 If I may argue, these are very, very simple ways
00:47:27 of human robot collaboration,
00:47:29 because the task is pretty boring.
00:47:31 You sit in a vehicle, you go from point A to point B.
00:47:35 I think the more interesting thing to me is,
00:47:37 for example, search and rescue.
00:47:38 I’ve got a human first responder, robot first responders.
00:47:43 I gotta do something.
00:47:45 It’s important.
00:47:46 I have to do it in two minutes.
00:47:47 The building is burning.
00:47:49 There’s been an explosion.
00:47:50 It’s collapsed.
00:47:51 How do I do it?
00:47:52 I think to me, those are the interesting things
00:47:54 where it’s very, very unstructured.
00:47:57 And what’s the role of the human?
00:47:58 What’s the role of the robot?
00:48:00 Clearly, there’s lots of interesting challenges
00:48:02 and there’s a field.
00:48:03 I think we’re gonna make a lot of progress in this area.
00:48:05 Yeah, it’s an exciting form of collaboration.
00:48:07 You’re right.
00:48:08 In autonomous driving, the main enemy
00:48:11 is just boredom of the human.
00:48:13 Yes.
00:48:13 As opposed to in rescue operations,
00:48:15 it’s literally life and death.
00:48:18 And the collaboration enables
00:48:22 the effective completion of the mission.
00:48:23 So it’s exciting.
00:48:24 In some sense, we’re also doing this.
00:48:27 You think about the human driving a car
00:48:30 and almost invariably, the human’s trying
00:48:33 to estimate the state of the car,
00:48:35 they estimate the state of the environment and so on.
00:48:37 But what if the car were to estimate the state of the human?
00:48:40 So for example, I’m sure you have a smartphone
00:48:41 and the smartphone tries to figure out what you’re doing
00:48:44 and send you reminders and oftentimes telling you
00:48:48 to drive to a certain place,
00:48:49 although you have no intention of going there
00:48:51 because it thinks that that’s where you should be
00:48:53 because of some Gmail calendar entry
00:48:57 or something like that.
00:48:58 And it’s trying to constantly figure out who you are,
00:49:01 what you’re doing.
00:49:02 If a car were to do that,
00:49:04 maybe that would make the driver safer
00:49:06 because the car is trying to figure out
00:49:08 is the driver paying attention,
00:49:09 looking at his or her eyes,
00:49:12 looking at saccadic movements.
00:49:14 So I think the potential is there,
00:49:16 but from the reverse side,
00:49:18 it’s not robot modeling, but it’s human modeling.
00:49:21 It’s more on the human, right.
00:49:22 And I think the robots can do a very good job
00:49:25 of modeling humans if you really think about the framework
00:49:29 that you have a human sitting in a cockpit,
00:49:32 surrounded by sensors, all staring at him,
00:49:35 in addition to be staring outside,
00:49:37 but also staring at him.
00:49:39 I think there’s a real synergy there.
00:49:40 Yeah, I love that problem
00:49:42 because it’s the new 21st century form of psychology,
00:49:45 actually AI enabled psychology.
00:49:48 A lot of people have sci fi inspired fears
00:49:51 of walking robots like those from Boston Dynamics.
00:49:54 If you just look at shows on Netflix and so on,
00:49:56 or flying robots like those you work with,
00:49:59 how would you, how do you think about those fears?
00:50:03 How would you alleviate those fears?
00:50:05 Do you have inklings, echoes of those same concerns?
00:50:09 You know, anytime we develop a technology
00:50:11 meant to have a positive impact in the world,
00:50:14 there’s always the worry that,
00:50:17 you know, somebody could subvert those technologies
00:50:21 and use it in an adversarial setting.
00:50:23 And robotics is no exception, right?
00:50:25 So I think it’s very easy to weaponize robots.
00:50:29 I think we talk about swarms.
00:50:31 One thing I worry a lot about is,
00:50:33 so, you know, for us to get swarms to work
00:50:35 and do something reliably, it’s really hard.
00:50:38 But suppose I have this challenge
00:50:42 of trying to destroy something,
00:50:44 and I have a swarm of robots,
00:50:45 where only one out of the swarm
00:50:47 needs to get to its destination.
00:50:48 So that suddenly becomes a lot more doable.
00:50:52 And so I worry about, you know,
00:50:54 this general idea of using autonomy
00:50:56 with lots and lots of agents.
00:51:00 I mean, having said that, look,
00:51:01 a lot of this technology is not very mature.
00:51:03 My favorite saying is that
00:51:06 if somebody had to develop this technology,
00:51:10 wouldn’t you rather the good guys do it?
00:51:12 So the good guys have a good understanding
00:51:13 of the technology, so they can figure out
00:51:15 how this technology is being used in a bad way,
00:51:18 or could be used in a bad way and try to defend against it.
00:51:21 So we think a lot about that.
00:51:22 So we have, we’re doing research
00:51:25 on how to defend against swarms, for example.
00:51:28 That’s interesting.
00:51:29 There’s in fact a report by the National Academies
00:51:32 on counter UAS technologies.
00:51:36 This is a real threat,
00:51:38 but we’re also thinking about how to defend against this
00:51:40 and knowing how swarms work.
00:51:42 Knowing how autonomy works is, I think, very important.
00:51:47 So it’s not just politicians?
00:51:49 Do you think engineers have a role in this discussion?
00:51:51 Absolutely.
00:51:52 I think the days where politicians
00:51:55 can be agnostic to technology are gone.
00:51:59 I think every politician needs to be
00:52:03 literate in technology.
00:52:05 And I often say technology is the new liberal art.
00:52:09 Understanding how technology will change your life,
00:52:12 I think is important.
00:52:14 And every human being needs to understand that.
00:52:18 And maybe we can elect some engineers
00:52:20 to office as well on the other side.
00:52:22 What are the biggest open problems in robotics?
00:52:24 And you said we’re in the early days in some sense.
00:52:27 What are the problems we would like to solve in robotics?
00:52:31 I think there are lots of problems, right?
00:52:32 But I would phrase it in the following way.
00:52:36 If you look at the robots we’re building,
00:52:39 they’re still very much tailored towards
00:52:43 doing specific tasks and specific settings.
00:52:46 I think the question of how do you get them to operate
00:52:49 in much broader settings
00:52:53 where things can change in unstructured environments
00:52:58 is up in the air.
00:52:59 So think of self driving cars.
00:53:02 Today, we can build a self driving car in a parking lot.
00:53:05 We can do level five autonomy in a parking lot.
00:53:10 But can you do a level five autonomy
00:53:13 in the streets of Napoli in Italy or Mumbai in India?
00:53:16 No.
00:53:17 So in some sense, when we think about robotics,
00:53:22 we have to think about where they’re functioning,
00:53:25 what kind of environment, what kind of a task.
00:53:27 We have no understanding
00:53:29 of how to put both those things together.
00:53:32 So we’re in the very early days
00:53:34 of applying it to the physical world.
00:53:35 And I was just in Naples actually.
00:53:38 And there’s levels of difficulty and complexity
00:53:42 depending on which area you’re applying it to.
00:53:45 I think so.
00:53:46 And we don’t have a systematic way of understanding that.
00:53:51 Everybody says, just because a computer
00:53:53 can now beat a human at any board game,
00:53:56 we certainly know something about intelligence.
00:53:59 That’s not true.
00:54:01 A computer board game is very, very structured.
00:54:04 It is the equivalent of working in a Henry Ford factory
00:54:08 where things, parts come, you assemble, move on.
00:54:11 It’s a very, very, very structured setting.
00:54:14 That’s the easiest thing.
00:54:15 And we know how to do that.
00:54:18 So you’ve done a lot of incredible work
00:54:20 at UPenn, the University of Pennsylvania GRASP Lab.
00:54:23 You’re now Dean of Engineering at UPenn.
00:54:26 What advice do you have for a new bright eyed undergrad
00:54:31 interested in robotics or AI or engineering?
00:54:34 Well, I think there’s really three things.
00:54:36 One is you have to get used to the idea
00:54:40 that the world will not be the same in five years
00:54:42 or four years whenever you graduate, right?
00:54:45 Which is really hard to do.
00:54:46 So this thing about predicting the future,
00:54:48 every one of us needs to be trying
00:54:50 to predict the future always.
00:54:53 Not because you’ll be any good at it,
00:54:54 but by thinking about it,
00:54:56 I think you sharpen your senses and you become smarter.
00:55:00 So that’s number one.
00:55:02 Number two, it’s a corollary of the first piece,
00:55:05 which is you really don’t know what’s gonna be important.
00:55:09 So this idea that I’m gonna specialize in something
00:55:12 which will allow me to go in a particular direction,
00:55:15 it may be interesting,
00:55:16 but it’s important also to have this breadth
00:55:18 so you have this jumping off point.
00:55:22 I think the third thing,
00:55:23 and this is where I think Penn excels.
00:55:25 I mean, we teach engineering,
00:55:27 but it’s always in the context of the liberal arts.
00:55:29 It’s always in the context of society.
00:55:32 As engineers, we cannot afford to lose sight of that.
00:55:35 So I think that’s important.
00:55:37 But I think one thing that people underestimate
00:55:39 when they do robotics
00:55:40 is the importance of mathematical foundations,
00:55:43 the importance of representations.
00:55:47 Not everything can just be solved
00:55:50 by looking for ROS packages on the internet
00:55:52 or to find a deep neural network that works.
00:55:56 I think the representation question is key,
00:55:59 even to machine learning,
00:56:00 where if you ever hope to achieve or get to explainable AI,
00:56:05 somehow there need to be representations
00:56:07 that you can understand.
00:56:09 So if you wanna do robotics,
00:56:11 you should also do mathematics.
00:56:12 And you said liberal arts, a little literature.
00:56:16 If you wanna build a robot,
00:56:17 it should be reading Dostoyevsky.
00:56:19 I agree with that.
00:56:20 Very good.
00:56:21 So Vijay, thank you so much for talking today.
00:56:23 It was an honor.
00:56:24 Thank you.
00:56:25 It was just a very exciting conversation.
00:56:26 Thank you.