Russ Tedrake: Underactuated Robotics, Control, Dynamics and Touch #114

Transcript

00:00:00 The following is a conversation with Russ Tedrake,

00:00:03 a roboticist and professor at MIT

00:00:05 and vice president of robotics research

00:00:07 at Toyota Research Institute or TRI.

00:00:11 He works on control of robots in interesting,

00:00:15 complicated, underactuated, stochastic,

00:00:18 difficult to model situations.

00:00:19 He’s a great teacher and a great person,

00:00:22 one of my favorites at MIT.

00:00:25 We’ll get into a lot of topics in this conversation

00:00:28 from his time leading MIT’s DARPA Robotics Challenge team

00:00:32 to the awesome fact that he often runs

00:00:35 close to a marathon a day to and from work barefoot.

00:00:40 For a world class roboticist interested in elegant,

00:00:43 efficient control of underactuated dynamical systems

00:00:46 like the human body, this fact makes Russ

00:00:50 one of the most fascinating people I know.

00:00:54 Quick summary of the ads.

00:00:55 Three sponsors, Magic Spoon Cereal, BetterHelp,

00:00:59 and ExpressVPN.

00:01:00 Please consider supporting this podcast

00:01:02 by going to magicspoon.com slash lex

00:01:05 and using code lex at checkout,

00:01:07 going to betterhelp.com slash lex

00:01:10 and signing up at expressvpn.com slash lexpod.

00:01:14 Click the links in the description,

00:01:16 buy the stuff, get the discount.

00:01:18 It really is the best way to support this podcast.

00:01:21 If you enjoy this thing, subscribe on YouTube,

00:01:24 review it with five stars on Apple Podcast,

00:01:26 support it on Patreon, or connect with me

00:01:28 on Twitter at lexfridman.

00:01:31 As usual, I’ll do a few minutes of ads now

00:01:33 and never any ads in the middle

00:01:34 that can break the flow of the conversation.

00:01:37 This episode is supported by Magic Spoon,

00:01:40 low carb keto friendly cereal.

00:01:43 I’ve been on a mix of keto or carnivore diet

00:01:45 for a very long time now.

00:01:47 That means eating very little carbs.

00:01:50 I used to love cereal.

00:01:52 Obviously, most have crazy amounts of sugar,

00:01:54 which is terrible for you, so I quit years ago,

00:01:58 but Magic Spoon is a totally new thing.

00:02:00 Zero sugar, 11 grams of protein,

00:02:03 and only three net grams of carbs.

00:02:05 It tastes delicious.

00:02:07 It has a bunch of flavors, they’re all good,

00:02:09 but if you know what’s good for you,

00:02:11 you’ll go with cocoa, my favorite flavor

00:02:13 and the flavor of champions.

00:02:15 Click the magicspoon.com slash lex link in the description,

00:02:19 use code lex at checkout to get the discount

00:02:22 and to let them know I sent you.

00:02:24 So buy all of their cereal.

00:02:26 It’s delicious and good for you.

00:02:28 You won’t regret it.

00:02:30 This show is also sponsored by BetterHelp,

00:02:33 spelled H E L P Help.

00:02:36 Check it out at betterhelp.com slash lex.

00:02:39 They figure out what you need

00:02:40 and match you with a licensed professional therapist

00:02:43 in under 48 hours.

00:02:44 It’s not a crisis line, it’s not self help,

00:02:47 it is professional counseling done securely online.

00:02:51 As you may know, I’m a bit from the David Goggins line

00:02:53 of creatures and still have some demons to contend with,

00:02:57 usually on long runs or all nighters full of self doubt.

00:03:01 I think suffering is essential for creation,

00:03:04 but you can suffer beautifully

00:03:06 in a way that doesn’t destroy you.

00:03:08 For most people, I think a good therapist can help in this.

00:03:11 So it’s at least worth a try.

00:03:13 Check out the reviews, they’re all good.

00:03:15 It’s easy, private, affordable, available worldwide.

00:03:19 You can communicate by text anytime

00:03:21 and schedule weekly audio and video sessions.

00:03:25 Check it out at betterhelp.com slash lex.

00:03:28 This show is also sponsored by ExpressVPN.

00:03:31 Get it at expressvpn.com slash lex pod

00:03:34 to get a discount and to support this podcast.

00:03:37 Have you ever watched The Office?

00:03:39 If you have, you probably know it’s based

00:03:41 on a UK series also called The Office.

00:03:45 Not to stir up trouble, but I personally think

00:03:48 the British version is actually more brilliant

00:03:50 than the American one, but both are amazing.

00:03:53 Anyway, there are actually nine other countries

00:03:56 with their own version of The Office.

00:03:58 You can get access to them with no geo restriction

00:04:01 when you use ExpressVPN.

00:04:03 It lets you control where you want sites

00:04:05 to think you’re located.

00:04:07 You can choose from nearly 100 different countries,

00:04:10 giving you access to content

00:04:12 that isn’t available in your region.

00:04:14 So again, get it on any device at expressvpn.com slash lex pod

00:04:19 to get an extra three months free

00:04:22 and to support this podcast.

00:04:25 And now here’s my conversation with Russ Tedrake.

00:04:29 What is the most beautiful motion

00:04:31 of an animal or robot that you’ve ever seen?

00:04:36 I think the most beautiful motion of a robot

00:04:38 has to be the passive dynamic walkers.

00:04:41 I think there’s just something fundamentally beautiful.

00:04:43 The ones in particular that Steve Collins built

00:04:45 with Andy Ruina at Cornell, a 3D walking machine.

00:04:50 So it was not confined to a boom or a plane.

00:04:54 You put it on top of a small ramp,

00:04:57 give it a little push, and it’s powered only by gravity.

00:05:00 No controllers, no batteries whatsoever.

00:05:04 It just falls down the ramp.

00:05:06 And at the time it looked more natural, more graceful,

00:05:09 more human like than any robot we’d seen to date

00:05:13 powered only by gravity.

00:05:15 How does it work?

00:05:17 Well, okay, the simplest model, it’s kind of like a slinky.

00:05:19 It’s like an elaborate slinky.

00:05:21 One of the simplest models we used to think about it

00:05:23 is actually a rimless wheel.

00:05:25 So imagine taking a bicycle wheel, but take the rim off.

00:05:30 So it’s now just got a bunch of spokes.

00:05:32 If you give that a push,

00:05:33 it still wants to roll down the ramp,

00:05:35 but every time its foot, its spoke comes around

00:05:38 and hits the ground, it loses a little energy.

00:05:41 Every time it takes a step forward,

00:05:43 it gains a little energy.

00:05:45 Those things can come into perfect balance.

00:05:48 And actually they want to, it’s a stable phenomenon.

00:05:51 If it’s going too slow, it’ll speed up.

00:05:53 If it’s going too fast, it’ll slow down

00:05:55 and it comes into a stable periodic motion.
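The rimless-wheel behavior described here has a clean quantitative version, the standard one-dimensional return map from walking-robot textbooks (the particular spoke angle, slope, and leg length below are illustrative, not anything stated in the conversation): between impacts energy is conserved as the wheel falls down the slope, each inelastic spoke impact scales the angular speed by cos(2α), and iterating the map from any reasonable speed converges to the same stable rolling gait.

```python
import math

G, L = 9.81, 1.0  # gravity, spoke ("leg") length

def rimless_wheel_step(omega, alpha, gamma):
    """Return map of the rimless wheel: post-collision angular speed
    to the next post-collision angular speed (assumes the wheel moves
    fast enough to carry over each spoke).

    Between collisions energy is conserved while the wheel falls a net
    height down the ramp; at each inelastic spoke impact the angular
    speed is scaled by cos(2*alpha)."""
    omega_before_sq = omega**2 + (4 * G / L) * math.sin(alpha) * math.sin(gamma)
    return math.cos(2 * alpha) * math.sqrt(omega_before_sq)

alpha = math.pi / 8  # spokes 2*alpha = 45 degrees apart (illustrative)
gamma = 0.08         # ramp slope in radians (illustrative)

# Analytic fixed point of the map: the steady rolling gait.
c = math.cos(2 * alpha) ** 2
omega_star = math.sqrt(c / (1 - c) * (4 * G / L) * math.sin(alpha) * math.sin(gamma))

# Too slow speeds up, too fast slows down:
assert rimless_wheel_step(1.0, alpha, gamma) > 1.0
assert rimless_wheel_step(3.0, alpha, gamma) < 3.0

# Either way, it settles onto the same stable periodic motion:
for omega in (1.0, 3.0):
    for _ in range(50):
        omega = rimless_wheel_step(omega, alpha, gamma)
    assert abs(omega - omega_star) < 1e-9
```

Because the impact loss grows with speed while the energy gained per step is fixed by the geometry, the map is a contraction around its fixed point, which is exactly the "comes into perfect balance" phenomenon.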

00:05:59 Now you can take that rimless wheel,

00:06:02 which doesn’t look very much like a human walking,

00:06:05 take all the extra spokes away, put a hinge in the middle.

00:06:08 Now it’s two legs.

00:06:09 That’s called our compass gait walker.

00:06:11 That can still, you give it a little push,

00:06:13 it starts falling down a ramp.

00:06:15 It looks a little bit more like walking.

00:06:17 At least it’s a biped.

00:06:19 But what Steve and Andy,

00:06:21 and Tad McGeer started the whole exercise,

00:06:23 but what Steve and Andy did was they took it

00:06:25 to this beautiful conclusion

00:06:28 where they built something that had knees, arms, a torso.

00:06:32 The arms swung naturally, give it a little push.

00:06:36 And that looked like a stroll through the park.

00:06:38 How do you design something like that?

00:06:40 I mean, is that art or science?

00:06:42 It’s on the boundary.

00:06:43 I think there’s a science to getting close to the solution.

00:06:47 I think there’s certainly art in the way

00:06:49 that they made a beautiful robot.

00:06:52 But then the finesse, because they were working

00:06:57 with a system that wasn’t perfectly modeled,

00:06:58 wasn’t perfectly controlled,

00:07:01 there’s all these little tricks

00:07:02 that you have to tune the suction cups at the knees,

00:07:05 for instance, so that they stick,

00:07:07 but then they release at just the right time.

00:07:09 Or there’s all these little tricks of the trade,

00:07:12 which really are art, but it was a point.

00:07:14 I mean, it made the point.

00:07:16 At that time,

00:07:18 the best walking robot in the world was Honda’s ASIMO.

00:07:21 An absolute marvel of modern engineering.

00:07:24 Is this 90s?

00:07:25 This was in ’97 when they first released it.

00:07:27 They sort of announced P2 first, and then it went through iterations.

00:07:29 It was ASIMO by then in 2004.

00:07:32 And it looks like this very cautious walking,

00:07:37 like you’re walking on hot coals or something like that.

00:07:41 I think it gets a bad rap.

00:07:43 ASIMO is a beautiful machine.

00:07:45 It does walk with its knees bent.

00:07:47 Our Atlas walking had its knees bent.

00:07:49 But actually, ASIMO was pretty fantastic.

00:07:52 But it wasn’t energy efficient.

00:07:54 Neither was Atlas when we worked on Atlas.

00:07:58 None of our robots that have been that complicated

00:08:00 have been very energy efficient.

00:08:04 But there’s a thing that happens when you do control,

00:08:09 when you try to control a system of that complexity.

00:08:12 You try to use your motors to basically counteract gravity.

00:08:17 Take whatever the world’s doing to you and push back,

00:08:20 erase the dynamics of the world,

00:08:23 and impose the dynamics you want

00:08:25 because you can make them simple and analyzable,

00:08:28 mathematically simple.

00:08:30 And this was a very sort of beautiful example

00:08:34 that you don’t have to do that.

00:08:36 You can just let go.

00:08:37 Let physics do most of the work, right?

00:08:40 And you just have to give it a little bit of energy.

00:08:42 This one only walked down a ramp.

00:08:43 It would never walk on the flat.

00:08:45 To walk on the flat,

00:08:46 you have to give a little energy at some point.

00:08:48 But maybe instead of trying to take the forces imparted

00:08:51 to you by the world and replacing them,

00:08:55 what we should be doing is letting the world push us around

00:08:58 and we go with the flow.

00:08:59 Very zen, very zen robot.

00:09:01 Yeah, but okay, so that sounds very zen,

00:09:03 but I can also imagine how many like failed versions

00:09:10 they had to go through.

00:09:11 Like how many, like, I would say it’s probably,

00:09:14 would you say it’s in the thousands

00:09:15 that they’ve had to have the system fall down

00:09:17 before they figured out how to get it?

00:09:19 I don’t know if it’s thousands, but it’s a lot.

00:09:22 It takes some patience.

00:09:23 There’s no question.

00:09:25 So in that sense, control might help a little bit.

00:09:28 Oh, I think everybody, even at the time,

00:09:32 said that the answer is to do that with control.

00:09:35 But it was just pointing out

00:09:36 that maybe the way we’re doing control right now

00:09:39 isn’t the way we should.

00:09:41 Got it.

00:09:41 So what about on the animal side,

00:09:43 the ones that figured out how to move efficiently?

00:09:46 Is there anything you find inspiring or beautiful

00:09:49 in the movement of any particular animal?

00:09:51 I do have a favorite example.

00:09:51 Okay.

00:09:52 So it sort of goes with the passive walking idea.

00:09:57 So is there, you know, how energy efficient are animals?

00:10:01 Okay, there’s a great series of experiments

00:10:03 by George Lauder at Harvard and Michael Triantafyllou at MIT.

00:10:07 They were studying fish swimming in a water tunnel.

00:10:10 Okay.

00:10:11 And one of these, the type of fish they were studying

00:10:15 were these rainbow trout,

00:10:17 because there was a phenomenon well understood

00:10:20 that rainbow trout, when they’re swimming upstream

00:10:22 in mating season, they kind of hang out behind the rocks.

00:10:25 And it looks like, I mean,

00:10:26 that’s tiring work swimming upstream.

00:10:28 They’re hanging out behind the rocks.

00:10:29 Maybe there’s something energetically interesting there.

00:10:31 So they tried to recreate that.

00:10:33 They put in this water tunnel, a rock basically,

00:10:36 a cylinder that had the same sort of vortex street,

00:10:40 the eddies coming off the back of the rock

00:10:42 that you would see in a stream.

00:10:44 And they put a real fish behind this

00:10:46 and watched how it swims.

00:10:48 And the amazing thing is that if you watch from above

00:10:51 how the fish swims when it’s not behind a rock,

00:10:53 it has a particular gait.

00:10:56 You can identify the fish the same way you look

00:10:58 at a human walking down the street.

00:10:59 You sort of have a sense of how a human walks.

00:11:02 The fish has a characteristic gait.

00:11:05 You put that fish behind the rock, its gait changes.

00:11:09 And what they saw was that it was actually resonating

00:11:12 and kind of surfing between the vortices.

00:11:16 Now, here was the experiment that really was the clincher.

00:11:20 Because there was still, it wasn’t clear how much of that

00:11:22 was mechanics of the fish,

00:11:24 how much of that is control, the brain.

00:11:26 So the clincher experiment,

00:11:28 and maybe one of my favorites to date,

00:11:29 although there are many good experiments.

00:11:33 They took, this was now a dead fish.

00:11:38 They took a dead fish.

00:11:40 They put a string

00:11:41 that tied the mouth of the fish to the rock

00:11:44 so it couldn’t go back and get caught in the grates.

00:11:47 And then they asked what would that dead fish do

00:11:49 when it was hanging out behind the rock?

00:11:51 And so what you’d expect, it sort of flopped around

00:11:52 like a dead fish in the vortex wake

00:11:56 until something sort of amazing happens.

00:11:57 And this video is worth putting in, right?

00:12:02 What happens?

00:12:04 The dead fish basically starts swimming upstream, right?

00:12:07 It’s completely dead, no brain, no motors, no control.

00:12:12 But it’s somehow the mechanics of the fish

00:12:14 resonate with the vortex street

00:12:16 and it starts swimming upstream.

00:12:18 It’s one of the best examples ever.

00:12:20 Who do you give credit for that to?

00:12:23 Is that just evolution constantly just figuring out

00:12:27 by killing a lot of generations of animals,

00:12:30 like the most efficient motion?

00:12:33 Is that, or maybe the physics of our world completely like,

00:12:38 is like if evolution applied not only to animals,

00:12:40 but just the entirety of it somehow drives to efficiency,

00:12:45 like nature likes efficiency?

00:12:47 I don’t know if that question even makes any sense.

00:12:49 I understand the question.

00:12:51 That’s reasonable.

00:12:51 I mean, do they co evolve?

00:12:54 Yeah, somehow co, yeah.

00:12:55 Like I don’t know if an environment can evolve, but.

00:13:00 I mean, there are experiments that people do,

00:13:02 careful experiments that show that animals can adapt

00:13:05 to unusual situations and recover efficiency.

00:13:08 So there seems like at least in one direction,

00:13:11 I think there is reason to believe

00:13:12 that the animal’s motor system and probably its mechanics

00:13:18 adapt in order to be more efficient.

00:13:20 But efficiency isn’t the only goal, of course.

00:13:23 Sometimes it’s too easy to think about only efficiency,

00:13:26 but we have to do a lot of other things first, not get eaten.

00:13:30 And then all other things being equal, try to save energy.

00:13:34 By the way, let’s draw a distinction

00:13:36 between control and mechanics.

00:13:38 Like how would you define each?

00:13:40 Yeah.

00:13:41 I mean, I think part of the point is that

00:13:43 we shouldn’t draw a line as clearly as we tend to.

00:13:47 But on a robot, we have motors

00:13:51 and we have the links of the robot, let’s say.

00:13:54 If the motors are turned off,

00:13:56 the robot has some passive dynamics, okay?

00:13:59 Gravity does the work.

00:14:01 You can put springs, I would call that mechanics, right?

00:14:03 If we have springs and dampers,

00:14:04 which our muscles are springs and dampers and tendons.

00:14:08 But then you have something that’s doing active work,

00:14:10 putting energy in, which are your motors on the robot.

00:14:13 The controller’s job is to send commands to the motor

00:14:16 that add new energy into the system, right?

00:14:19 So the mechanics and control interplay somewhere,

00:14:22 the divide is around, you know,

00:14:24 did you decide to send some commands to your motor

00:14:27 or did you just leave the motors off,

00:14:28 let them do their work?
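The divide drawn here can be made concrete with a toy model (this is an illustrative sketch, not anything described in the conversation; all parameters are arbitrary): a damped pendulum whose "controller" is just a function returning motor torque. With the torque identically zero you get the passive mechanics, where gravity does the work and damping can only dissipate energy; a torque commanded in phase with velocity does active work and pumps energy in.

```python
import math

def simulate(torque_fn, theta0=0.5, steps=1000, dt=0.01,
             m=1.0, l=1.0, b=0.1, g=9.81):
    """Semi-implicit Euler simulation of a damped pendulum.
    torque_fn(theta, omega) plays the role of the controller;
    returning 0 everywhere means "motors off" -- pure mechanics."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        tau = torque_fn(theta, omega)
        # gravity and damping are the passive mechanics; tau is the
        # only term through which the controller can add energy
        omega += dt * (-m * g * l * math.sin(theta) - b * omega + tau) / (m * l**2)
        theta += dt * omega
    # mechanical energy, measured from the hanging position
    return 0.5 * m * l**2 * omega**2 + m * g * l * (1 - math.cos(theta))

E0 = 9.81 * (1 - math.cos(0.5))               # energy at the start
E_passive = simulate(lambda th, om: 0.0)      # motors off: energy dissipates
E_active = simulate(lambda th, om: 0.2 * om)  # torque in phase with velocity pumps energy in
```

The passive run ends with less mechanical energy than it started with, while the actively driven run ends with more; everything else about the two simulations is identical mechanics.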

00:14:30 Would you say is most of nature

00:14:35 on the dynamic side or the control side?

00:14:39 So like, if you look at biological systems,

00:14:43 we’re living in a pandemic now,

00:14:45 like, do you think a virus is a,

00:14:47 do you think it’s a dynamic system

00:14:50 or is there a lot of control, intelligence?

00:14:54 I think it’s both, but I think we maybe have underestimated

00:14:57 how important the dynamics are, right?

00:15:02 I mean, even our bodies, the mechanics of our bodies,

00:15:04 certainly with exercise, they evolve.

00:15:06 But so I actually, I lost a finger in early 2000s

00:15:11 and it’s my fifth metacarpal.

00:15:14 And it turns out you use that a lot

00:15:16 in ways you don’t expect when you’re opening jars,

00:15:19 even when I’m just walking around,

00:15:20 if I bump it on something, there’s a bone there

00:15:23 that was used to taking contact.

00:15:26 My fourth metacarpal wasn’t used to taking contact,

00:15:28 it used to hurt, it still does a little bit.

00:15:31 But actually my bone has remodeled, right?

00:15:34 Over a couple of years, the geometry,

00:15:39 the mechanics of that bone changed

00:15:42 to address the new circumstances.

00:15:44 So the idea that somehow it’s only our brain

00:15:46 that’s adapting or evolving is not right.

00:15:50 Maybe sticking on evolution for a bit,

00:15:52 because it’s tended to create some interesting things.

00:15:56 Bipedal walking, why the heck did evolution give us,

00:16:01 I think we’re, are we the only mammals that walk on two feet?

00:16:05 No, I mean, there’s a bunch of animals that do it a bit.

00:16:09 A bit.

00:16:09 I think we are the most successful bipeds.

00:16:12 I think I read somewhere that the reason

00:16:17 the evolution made us walk on two feet

00:16:22 is because there’s an advantage

00:16:24 to being able to carry food back to the tribe

00:16:27 or something like that.

00:16:28 So like you can carry, it’s kind of this communal,

00:16:31 cooperative thing, so like to carry stuff back

00:16:35 to a place of shelter and so on to share with others.

00:16:40 Do you understand at all the value of walking on two feet

00:16:44 from both a robotics and a human perspective?

00:16:48 Yeah, there are some great books written

00:16:50 about evolution of, walking evolution of the human body.

00:16:54 I think it’s easy though to make bad evolutionary arguments.

00:17:00 Sure, most of them are probably bad,

00:17:03 but what else can we do?

00:17:06 I mean, I think a lot of what dominated our evolution

00:17:11 probably was not the things that worked well

00:17:15 sort of in the steady state, you know,

00:17:18 when things are good, but for instance,

00:17:22 people talk about what we should eat now

00:17:25 because our ancestors were meat eaters or whatever.

00:17:28 Oh yeah, I love that, yeah.

00:17:30 But probably, you know, the reason

00:17:32 that one pre Homo sapiens species versus another survived

00:17:39 was not because of whether they ate well

00:17:43 when there was lots of food.

00:17:45 But when the ice age came, you know,

00:17:47 probably one of them happened to be in the wrong place.

00:17:50 One of them happened to forage a food that was okay

00:17:54 even when the glaciers came or something like that, I mean.

00:17:58 There’s a million variables that contributed

00:18:00 and we can’t, and our, actually the amount of information

00:18:04 we’re working with and telling these stories,

00:18:06 these evolutionary stories is very little.

00:18:10 So yeah, just like you said, it seems like,

00:18:13 if you study history, it seems like history turns

00:18:15 on like these little events that otherwise

00:18:20 would seem meaningless, but in the grand scheme of things,

00:18:23 in retrospect, were turning points.

00:18:27 Absolutely.

00:18:28 And that’s probably how like somebody got hit in the head

00:18:31 with a rock because somebody slept with the wrong person

00:18:35 back in the cave days and somebody got angry

00:18:38 and that turned, you know, warring tribes

00:18:41 combined with the environment, all those millions of things

00:18:45 and the meat eating, which I get a lot of criticism

00:18:47 because I don’t know what your dietary processes are like,

00:18:51 but these days I’ve been eating only meat,

00:18:55 which is, there’s a large community of people who say,

00:18:59 yeah, probably make evolutionary arguments

00:19:01 and say you’re doing a great job.

00:19:02 There’s probably an even larger community of people,

00:19:05 including my mom, who says it’s deeply unhealthy,

00:19:08 it’s wrong, but I just feel good doing it.

00:19:10 But you’re right, these evolutionary arguments

00:19:12 can be flawed, but is there anything interesting

00:19:15 to pull out for?

00:19:17 There’s a great book, by the way,

00:19:19 well, a series of books by Nassim Taleb

00:19:21 about Fooled by Randomness and The Black Swan.

00:19:24 Highly recommend them, but yeah,

00:19:26 they make the point nicely that probably

00:19:29 it was a few random events that, yes,

00:19:34 maybe it was someone getting hit by a rock, as you say.

00:19:39 That said, do you think, I don’t know how to ask this

00:19:42 question or how to talk about this,

00:19:44 but there’s something elegant and beautiful

00:19:45 about moving on two feet, obviously biased

00:19:48 because I’m human, but from a robotics perspective, too,

00:19:53 you work with robots on two feet,

00:19:56 is it all useful to build robots that are on two feet

00:20:00 as opposed to four?

00:20:01 Is there something useful about it?

00:20:02 I think the most, I mean, the reason I spent a long time

00:20:05 working on bipedal walking was because it was hard

00:20:09 and it challenged control theory in ways

00:20:12 that I thought were important.

00:20:13 I wouldn’t have ever tried to convince you

00:20:18 that you should start a company around bipeds

00:20:22 or something like this.

00:20:24 There are people that make pretty compelling arguments.

00:20:26 I think the most compelling one is that the world

00:20:28 is built for the human form, and if you want a robot

00:20:32 to work in the world we have today,

00:20:34 then having a human form is a pretty good way to go.

00:20:39 There are places that a biped can go that would be hard

00:20:42 for other form factors to go, even natural places,

00:20:47 but at some point in the long run,

00:20:51 we’ll be building our environments for our robots, probably,

00:20:54 and so maybe that argument falls aside.

00:20:56 So you famously run barefoot.

00:21:00 Do you still run barefoot?

00:21:02 I still run barefoot.

00:21:03 That’s so awesome.

00:21:04 Much to my wife’s chagrin.

00:21:07 Do you want to make an evolutionary argument

00:21:09 for why running barefoot is advantageous?

00:21:12 What have you learned about human and robot movement

00:21:17 in general from running barefoot?

00:21:21 Human or robot and or?

00:21:23 Well, you know, it happened the other way, right?

00:21:25 So I was studying walking robots,

00:21:27 and there’s a great conference called

00:21:31 the Dynamic Walking Conference where it brings together

00:21:35 both the biomechanics community

00:21:36 and the walking robots community.

00:21:39 And so I had been going to this for years

00:21:41 and hearing talks by people who study barefoot running

00:21:45 and other, the mechanics of running.

00:21:48 So I did eventually read Born to Run.

00:21:50 Most people read Born to Run first, right?

00:21:54 The other thing I had going for me is actually

00:21:55 that I wasn’t a runner before,

00:21:58 and I learned to run after I had learned

00:22:01 about barefoot running, or I mean,

00:22:03 started running longer distances.

00:22:05 So I didn’t have to unlearn.

00:22:07 And I’m definitely, I’m a big fan of it for me,

00:22:11 but I’m not going to,

00:22:12 I tend to not try to convince other people.

00:22:14 There’s people who run beautifully with shoes on,

00:22:17 and that’s good.

00:22:20 But here’s why it makes sense for me.

00:22:24 It’s all about the longterm game, right?

00:22:26 So I think it’s just too easy to run 10 miles,

00:22:29 feel pretty good, and then you get home at night

00:22:31 and you realize my knees hurt.

00:22:33 I did something wrong, right?

00:22:37 If you take your shoes off,

00:22:39 then if you hit hard with your foot at all,

00:22:44 then it hurts.

00:22:45 You don’t like run 10 miles

00:22:47 and then realize you’ve done some damage.

00:22:50 You have immediate feedback telling you

00:22:52 that you’ve done something that’s maybe suboptimal,

00:22:55 and you change your gait.

00:22:56 I mean, it’s even subconscious.

00:22:57 If I, right now, having run many miles barefoot,

00:23:00 if I put a shoe on, my gait changes

00:23:03 in a way that I think is not as good.

00:23:05 So it makes me land softer.

00:23:09 And I think my goals for running

00:23:13 are to do it for as long as I can into old age,

00:23:16 not to win any races.

00:23:19 And so for me, this is a way to protect myself.

00:23:23 Yeah, I think, first of all,

00:23:25 I’ve tried running barefoot many years ago,

00:23:29 probably the other way,

00:23:30 just reading Born to Run.

00:23:33 But just to understand,

00:23:36 because I felt like I couldn’t put in the miles

00:23:39 that I wanted to.

00:23:40 And it feels like running for me,

00:23:44 and I think for a lot of people,

00:23:46 was one of those activities that we do often

00:23:48 and we never really try to learn to do correctly.

00:23:53 Like, it’s funny, there’s so many activities

00:23:55 we do every day, like brushing our teeth, right?

00:24:00 I think a lot of us, at least me,

00:24:02 probably have never deeply studied

00:24:04 how to properly brush my teeth, right?

00:24:07 Or wash, as now with the pandemic,

00:24:08 or how to properly wash our hands.

00:24:10 We do it every day, but we haven’t really studied,

00:24:13 like, am I doing this correctly?

00:24:15 But running felt like one of those things,

00:24:17 where it was absurd not to study how to do it correctly,

00:24:20 because it’s the source of so much pain and suffering.

00:24:23 Like, I hate running, but I do it.

00:24:25 I do it because I hate it, but I feel good afterwards.

00:24:28 But I think it feels like you need

00:24:30 to learn how to do it properly.

00:24:31 So that’s where barefoot running came in,

00:24:33 and then I quickly realized that my gait

00:24:35 was completely wrong.

00:24:38 I was taking huge steps,

00:24:41 and landing hard on the heel, all those elements.

00:24:45 And so, yeah, from that I actually learned

00:24:47 to take really small steps. Look,

00:24:50 I already forgot the number,

00:24:52 but I feel like it was 180 steps a minute or something like that.

00:24:55 And I remember I actually just took songs

00:25:00 that are 180 beats per minute,

00:25:03 and then tried to run at that beat,

00:25:06 just to teach myself.

00:25:07 It took a long time, and I feel like after a while,

00:25:11 you learn to run, you adjust properly,

00:25:14 without going all the way to barefoot.

00:25:15 But I feel like barefoot is the legit way to do it.

00:25:19 I mean, I think a lot of people

00:25:21 would be really curious about it.

00:25:23 Can you, if they’re interested in trying,

00:25:25 what would you, how would you recommend

00:25:27 they start, or try, or explore?

00:25:30 Slowly.

00:25:31 That’s the biggest thing people do,

00:25:33 is they are excellent runners,

00:25:35 and they’re used to running long distances,

00:25:37 or running fast, and they take their shoes off,

00:25:39 and they hurt themselves instantly trying to do

00:25:42 something that they were used to doing.

00:25:44 I think I lucked out in the sense

00:25:46 that I couldn’t run very far when I first started trying.

00:25:50 And I run with minimal shoes too.

00:25:51 I mean, I will bring along a pair of,

00:25:54 actually, like aqua socks or something like this,

00:25:56 I can just slip on, or running sandals,

00:25:58 I’ve tried all of them.

00:26:00 What’s the difference between a minimal shoe

00:26:02 and nothing at all?

00:26:03 What’s, like, feeling wise, what does it feel like?

00:26:07 There is a, I mean, I notice my gait changing, right?

00:26:10 So, I mean, your foot has as many muscles

00:26:15 and sensors as your hand does, right?

00:26:17 Sensors, ooh, okay.

00:26:19 And we do amazing things with our hands.

00:26:23 And we stick our foot in a big, solid shoe, right?

00:26:26 So there’s, I think, you know, when you’re barefoot,

00:26:29 you’re just giving yourself more proprioception.

00:26:33 And that’s why you’re more aware of some of the gait flaws

00:26:35 and stuff like this.

00:26:37 Now, you have less protection too, so.

00:26:40 Rocks and stuff.

00:26:42 I mean, yeah, so I think people who are afraid

00:26:45 of barefoot running are worried about getting cuts

00:26:47 or stepping on rocks.

00:26:49 First of all, even if that was a concern,

00:26:51 I think those are all, like, very short term.

00:26:54 You know, if I get a scratch or something,

00:26:55 it’ll heal in a week.

00:26:56 If I blow out my knees, I’m done running forever.

00:26:58 So I will trade the short term for the long term anytime.

00:27:01 But even then, you know, and this, again,

00:27:04 to my wife’s chagrin, your feet get tough, right?

00:27:07 And, yeah, I can run over almost anything now.

00:27:13 I mean, what, can you talk about,

00:27:17 is there, like, is there tips or tricks

00:27:21 that you have, suggestions about,

00:27:24 like, if I wanted to try it?

00:27:26 You know, there is a good book, actually.

00:27:29 There’s probably more good books since I read them.

00:27:32 But Ken Bob, Barefoot Ken Bob Saxton.

00:27:37 He’s an interesting guy.

00:27:38 But I think his book captures the right way

00:27:42 to describe running, barefoot running,

00:27:44 to somebody better than any other I’ve seen.

00:27:48 So you run pretty good distances, and you bike,

00:27:52 and is there, you know, if we talk about bucket list items,

00:27:57 is there something crazy on your bucket list,

00:28:00 athletically, that you hope to do one day?

00:28:04 I mean, my commute is already a little crazy.

00:28:07 What are we talking about here?

00:28:09 What distance are we talking about?

00:28:11 Well, I live about 12 miles from MIT,

00:28:14 but you can find lots of different ways to get there.

00:28:16 So, I mean, I’ve run there for many years, I’ve biked there.

00:28:20 Both ways?

00:28:21 Yeah, but normally I would try to run in

00:28:23 and then bike home, bike in, run home.

00:28:25 But you have run there and back before?

00:28:28 Sure.

00:28:28 Barefoot?

00:28:29 Yeah, or with minimal shoes or whatever that.

00:28:32 12, 12 times two?

00:28:34 Yeah.

00:28:35 Okay.

00:28:36 It became kind of a game of how can I get to work?

00:28:38 I’ve rollerbladed, I’ve done all kinds of weird stuff,

00:28:41 but my favorite one these days,

00:28:42 I’ve been taking the Charles River to work.

00:28:45 So, I can put in the rowboat not so far from my house,

00:28:50 but the Charles River takes a long way to get to MIT,

00:28:53 so I can spend a long time getting there.

00:28:56 And it’s not about, I don’t know, it’s just about,

00:29:01 I’ve had people ask me,

00:29:02 how can you justify taking that time?

00:29:05 But for me, it’s just a magical time to think,

00:29:10 to compress, decompress.

00:29:13 Especially, I’ll wake up, do a lot of work in the morning,

00:29:16 and then I kind of have to just let that settle

00:29:19 before I’m ready for all my meetings.

00:29:20 And then on the way home, it’s a great time to sort of

00:29:23 let that settle.

00:29:24 You lead a large group of people.

00:29:31 Is there days where you’re like,

00:29:33 oh shit, I gotta get to work in an hour?

00:29:36 Like, I mean, is there a tension there?

00:29:45 And like, if we look at the grand scheme of things,

00:29:47 just like you said, long term,

00:29:49 that meeting probably doesn’t matter.

00:29:51 Like, you can always say, I’ll just, I’ll run

00:29:54 and let the meeting happen, how it happens.

00:29:57 Like, how do you keep that zen?

00:30:02 What do you do with that tension

00:30:03 between the real world saying urgently,

00:30:05 you need to be there, this is important,

00:30:08 everything is melting down,

00:30:10 how are we gonna fix this robot?

00:30:11 There’s this critical meeting,

00:30:14 and then there’s this, the zen beauty of just running,

00:30:18 the simplicity of it, you along with nature.

00:30:21 What do you do with that?

00:30:22 I would say I’m not a fast runner, particularly.

00:30:25 Probably my fastest splits ever were when

00:30:27 I had to get to daycare on time

00:30:29 because they were gonna charge me, you know,

00:30:30 some dollar per minute that I was late.

00:30:33 I’ve run some fast splits to daycare.

00:30:36 But those times are past now.

00:30:41 I think work, you can find a work life balance in that way.

00:30:44 I think you just have to.

00:30:47 I think I am better at work

00:30:48 because I take time to think on the way in.

00:30:52 So I plan my day around it,

00:30:55 and I rarely feel that those are really at odds.

00:31:00 So what, the bucket list item.

00:31:03 If we’re talking 12 times two, or approaching a marathon,

00:31:10 what, have you run an ultra marathon before?

00:31:15 Do you do races?

00:31:16 Is there, what’s a…

00:31:17 Not to win.

00:31:21 I’m not gonna like take a dinghy across the Atlantic

00:31:23 or something if that’s what you want.

00:31:24 But if someone does and wants to write a book,

00:31:27 I would totally read it

00:31:28 because I’m a sucker for that kind of thing.

00:31:31 No, I do have some fun things that I will try.

00:31:33 You know, I like to, when I travel,

00:31:35 I almost always bike to Logan Airport

00:31:37 and fold up a little folding bike

00:31:38 and then take it with me and bike to wherever I’m going.

00:31:41 And it’s taken me,

00:31:42 or I’ll take a stand up paddle board these days

00:31:44 on the airplane,

00:31:45 and then I’ll try to paddle around where I’m going

00:31:47 or whatever.

00:31:47 And I’ve done some crazy things, but…

00:31:50 But not for the, you know, I now talk,

00:31:55 I don’t know if you know who David Goggins is by any chance.

00:31:57 Not well, but yeah.

00:31:58 But I talk to him now every day.

00:32:00 So he’s the person who made me do this stupid challenge.

00:32:05 So he’s insane and he does things for the purpose

00:32:10 in the best kind of way.

00:32:11 He does things like for the explicit purpose of suffering.

00:32:16 Like he picks the thing that,

00:32:18 like whatever he thinks he can do, he does more.

00:32:22 So is that, do you have that thing in you or are you…

00:32:27 I think it’s become the opposite.

00:32:29 It’s a…

00:32:30 So you’re like that dynamical system

00:32:32 the walker, the efficient…

00:32:34 Yeah, it’s leave no pain, right?

00:32:38 You should end feeling better than you started.

00:32:40 Okay.

00:32:41 But it’s mostly, I think, and COVID has tested this

00:32:45 because I’ve lost my commute.

00:32:47 I think I’m perfectly happy walking around town

00:32:51 with my wife and kids, if I could get them to go.

00:32:55 And it’s more about just getting outside

00:32:57 and getting away from the keyboard for some time

00:32:59 just to let things decompress.

00:33:02 Let’s go into robotics a little bit.

00:33:04 What do you think is the most beautiful idea in robotics?

00:33:07 Whether we’re talking about control

00:33:10 or whether we’re talking about optimization

00:33:12 and the math side of things or the engineering side of things

00:33:16 or the philosophical side of things.

00:33:20 I think I’ve been lucky to experience something

00:33:23 that not so many roboticists have experienced,

00:33:27 which is to hang out

00:33:30 with some really amazing control theorists.

00:33:34 And the clarity of thought

00:33:40 that some of the more mathematical control theory

00:33:43 can bring to even very complex, messy looking problems

00:33:49 is really, it really had a big impact on me

00:33:53 and I had a day even just a couple of weeks ago

00:33:57 where I had spent the day on a Zoom robotics conference

00:34:01 having great conversations with lots of people.

00:34:04 Felt really good about the ideas

00:34:06 that were flowing and the like.

00:34:09 And then I had a late afternoon meeting

00:34:12 with one of my favorite control theorists

00:34:15 and we went from these abstract discussions

00:34:20 about maybes and what ifs and what a great idea

00:34:25 to these super precise statements

00:34:30 about systems that aren’t that much more simple

00:34:33 or abstract than the ones I care about deeply.

00:34:38 And the contrast of that is,

00:34:42 I don’t know, it really gets me.

00:34:43 I think people underestimate

00:34:47 maybe the power of clear thinking.

00:34:51 And so for instance, deep learning is amazing.

00:34:58 I use it heavily in our work.

00:35:00 I think it’s changed the world, unquestionable.

00:35:04 It makes it easy to get things to work

00:35:07 without thinking as critically about it.

00:35:08 So I think one of the challenges as an educator

00:35:11 is to think about how do we make sure people get a taste

00:35:14 of the more rigorous thinking

00:35:17 that I think goes along with some different approaches.

00:35:22 Yeah, so that’s really interesting.

00:35:24 So understanding like the fundamentals,

00:35:26 the first principles of the problem,

00:35:31 where in this case it’s mechanics,

00:35:33 like how a thing moves, how a thing behaves,

00:35:38 like all the forces involved,

00:35:40 like really getting a deep understanding of that.

00:35:42 I mean, from physics, the first principle thing

00:35:45 come from physics, and here it’s literally physics.

00:35:50 Yeah, and this applies, in deep learning,

00:35:51 this applies to not just, I mean,

00:35:54 it applies so cleanly in robotics,

00:35:57 but it also applies to just in any data set.

00:36:01 I find this true, I mean, driving as well.

00:36:05 There’s a lot of folks that work

00:36:09 on autonomous vehicles that don’t study driving,

00:36:17 like deeply.

00:36:20 I might be coming a little bit from the psychology side,

00:36:23 but I remember I spent a ridiculous number of hours

00:36:28 at lunch, at this like lawn chair,

00:36:31 and I would sit somewhere in MIT’s campus,

00:36:35 there’s a few interesting intersections,

00:36:37 and we’d just watch people cross.

00:36:39 So we were studying pedestrian behavior,

00:36:43 and we recorded a lot of video,

00:36:46 and then there’s the computer vision that

00:36:47 extracts their movement, how they move their head, and so on,

00:36:50 but like every time, I felt like I didn’t understand enough.

00:36:55 I just, I felt like I wasn’t understanding

00:36:58 what, how are people signaling to each other,

00:37:01 what are they thinking,

00:37:03 how cognizant are they of their fear of death?

00:37:07 Like, what’s the underlying game theory here?

00:37:11 What are the incentives?

00:37:14 And then I finally found a live stream of an intersection

00:37:17 that’s like high def that I just, I would watch

00:37:20 so I wouldn’t have to sit out there.

00:37:21 But it’s interesting, so like, I feel.

00:37:23 But that’s tough, that’s a tough example,

00:37:25 because I mean, the learning.

00:37:27 Humans are involved.

00:37:28 Not just because human, but I think the learning mantra

00:37:33 is that basically the statistics of the data

00:37:35 will tell me things I need to know, right?

00:37:37 And, you know, for the example you gave

00:37:41 of all the nuances of, you know, eye contact,

00:37:45 or hand gestures, or whatever that are happening

00:37:47 for these subtle interactions

00:37:48 between pedestrians and traffic, right?

00:37:51 Maybe the data will tell that story.

00:37:54 I’d maybe even go one level more meta than what you’re saying.

00:38:01 For a particular problem,

00:38:02 I think it might be the case

00:38:03 that data should tell us the story.

00:38:07 But I think there’s a rigorous thinking

00:38:09 that is just an essential skill

00:38:11 for a mathematician or an engineer

00:38:14 that I just don’t wanna lose.

00:38:18 There are certainly super rigorous control,

00:38:22 or sorry, machine learning people.

00:38:24 I just think deep learning makes it so easy

00:38:28 to do some things that our next generation,

00:38:31 are not immediately rewarded

00:38:35 for going through some of the more rigorous approaches.

00:38:38 And then I wonder where that takes us.

00:38:40 Well, I’m actually optimistic about it.

00:38:42 I just want to do my part

00:38:44 to try to steer that rigorous thinking.

00:38:48 So there’s like two questions I wanna ask.

00:38:50 Do you have sort of a good example of rigorous thinking

00:38:56 where it’s easy to get lazy and not do the rigorous thinking?

00:39:00 And the other question I have is like,

00:39:02 do you have advice of how to practice rigorous thinking

00:39:09 in all the computer science disciplines that we’ve mentioned?

00:39:16 Yeah, I mean, there are times where problems

00:39:21 that can be solved with well known mature methods

00:39:25 could also be solved with a deep learning approach.

00:39:30 And there’s an argument that you must use learning

00:39:36 even for the parts we already think we know,

00:39:38 because if the human has touched it,

00:39:39 then you’ve biased the system

00:39:42 and you’ve suddenly put a bottleneck in there

00:39:44 that is your own mental model.

00:39:46 But something like inverting a matrix,

00:39:49 I think we know how to do that pretty well,

00:39:50 even if it’s a pretty big matrix,

00:39:52 and we understand that pretty well.

00:39:53 And you could train a deep network to do it,

00:39:55 but you probably shouldn’t.

00:39:57 So in that sense, rigorous thinking is understanding

00:40:02 the scope and the limitations of the methods that we have,

00:40:07 like how to use the tools of mathematics properly.

00:40:10 Yeah, I think taking a class on analysis,

00:40:15 all I’m sort of arguing, is a chance to stop

00:40:18 and force yourself to think rigorously

00:40:20 about even the rational numbers or something.

00:40:25 It doesn’t have to be the end all problem.

00:40:27 But that exercise of clear thinking,

00:40:31 I think goes a long way,

00:40:33 and I just wanna make sure we keep preaching it.

00:40:35 We don’t lose it.

00:40:36 But do you think when you’re doing rigorous thinking

00:40:39 or maybe trying to write down equations

00:40:43 or sort of explicitly formally describe a system,

00:40:47 do you think we naturally simplify things too much?

00:40:51 Is that a danger you run into?

00:40:53 Like in order to be able to understand something

00:40:56 about the system mathematically,

00:40:58 we make it too much of a toy example.

00:41:01 But I think that’s the good stuff, right?

00:41:04 That’s how you understand the fundamentals?

00:41:07 I think so.

00:41:07 I think maybe even that’s a key to intelligence

00:41:10 or something, but I mean, okay,

00:41:12 what if Newton and Galileo had deep learning?

00:41:15 And they had done a bunch of experiments

00:41:18 and they told the world,

00:41:20 here’s your weights of your neural network.

00:41:22 We’ve solved the problem.

00:41:24 Where would we be today?

00:41:25 I don’t think we’d be as far as we are.

00:41:28 There’s something to be said

00:41:29 about having the simplest explanation for a phenomenon.

00:41:32 So I don’t doubt that we can train neural networks

00:41:37 to predict even physical F equals MA type equations.

00:41:46 But I maybe, I want another Newton to come along

00:41:51 because I think there’s more to do

00:41:52 in terms of coming up with the simple models

00:41:56 for more complicated tasks.
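As a toy contrast to the black-box route, the one-parameter law F = m·a can be recovered from data with a single least-squares formula. The data and noise below are invented purely for illustration:

```python
# Fit the parsimonious model F = m * a to noisy (a, F) samples by
# least squares: m = sum(a*F) / sum(a*a).  A deep network could fit
# the same data, but wouldn't hand back one interpretable number.

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.05)]  # made-up samples

mass = sum(a * F for a, F in data) / sum(a * a for a, F in data)
print(round(mass, 3))  # -> 2.003, i.e. the data is explained by m ~ 2
```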

00:41:59 Yeah, let’s not offend AI systems from 50 years

00:42:04 from now that are listening to this

00:42:06 that are probably better at,

00:42:08 might be better coming up

00:42:10 with F equals MA equations themselves.

00:42:13 So sorry, I actually think learning is probably a route

00:42:16 to achieving this, but the representation matters, right?

00:42:21 And I think having a function that takes my inputs

00:42:26 to outputs that is arbitrarily complex

00:42:29 may not be the end goal.

00:42:30 I think there’s still the most simple

00:42:34 or parsimonious explanation for the data.

00:42:37 Simple doesn’t mean low dimensional.

00:42:39 That’s one thing,

00:42:41 a lesson that I think we’ve learned.

00:42:41 So a standard way to do model reduction

00:42:46 or system identification in controls:

00:42:47 the typical formulation is that you try to find

00:42:50 the minimal state dimension realization of a system

00:42:54 that hits some error bounds or something like that.

00:42:57 And that’s maybe not, I think we’re learning

00:43:00 that state dimension is not the right metric.

00:43:05 Of complexity.

00:43:06 Of complexity.
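The typical formulation he describes, finding the minimal state dimension whose reduced model stays within an error bound, can be sketched on a toy modal system. The poles, weights, and bound below are invented for illustration, not from any real identification problem:

```python
# Toy version of minimal-state-dimension model reduction: the full
# model is a diagonal (modal) discrete-time system with impulse
# response y[t] = sum_i c_i * a_i**t; keep the fewest modes whose
# response stays within an error bound of the full one.

modes = [(0.9, 1.0), (0.5, 0.3), (0.1, 0.05)]  # (pole a_i, weight c_i)

def impulse_response(ms, horizon):
    return [sum(c * a ** t for a, c in ms) for t in range(horizon)]

def reduce_to_bound(ms, horizon, err_bound):
    """Greedily keep the most significant modes until the bound holds."""
    ranked = sorted(ms, key=lambda m: abs(m[1]) / (1 - abs(m[0])),
                    reverse=True)
    full = impulse_response(ms, horizon)
    for k in range(1, len(ms) + 1):
        approx = impulse_response(ranked[:k], horizon)
        err = max(abs(f - g) for f, g in zip(full, approx))
        if err <= err_bound:
            return k, ranked[:k]
    return len(ms), ranked

k, reduced = reduce_to_bound(modes, horizon=50, err_bound=0.1)
print(k)  # -> 2: two modes already meet the bound
```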

00:43:07 But for me, I think a lot about contact,

00:43:09 the mechanics of contact,

00:43:10 if a robot hand is picking up an object or something.

00:43:14 And when I write down the equations of motion for that,

00:43:17 they look incredibly complex,

00:43:19 not because, actually not so much

00:43:23 because of the dynamics of the hand when it’s moving,

00:43:26 but it’s just the interactions

00:43:28 and when they turn on and off, right?

00:43:30 So having a high dimensional,

00:43:33 but simple description of what’s happening out here is fine.

00:43:36 But if when I actually start touching,

00:43:38 if I write down a different dynamical system

00:43:41 for every polygon on my robot hand

00:43:45 and every polygon on the object,

00:43:47 whether it’s in contact or not,

00:43:49 with all the combinatorics that explodes there,

00:43:51 then that’s too complex.

00:43:54 So I need to somehow summarize that

00:43:55 with a more intuitive physics way of thinking.
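The combinatorial explosion he is alluding to can be made concrete with a toy count; the polygon numbers are made up, just to show the growth:

```python
# If each candidate contact pair between hand and object can
# independently be active or inactive, each on/off pattern is its own
# dynamical system, so the hybrid mode count is 2 ** (number of pairs).

def num_contact_modes(hand_polygons, object_polygons):
    pairs = hand_polygons * object_polygons
    return 2 ** pairs

print(num_contact_modes(1, 3))     # -> 8 modes: still enumerable
print(num_contact_modes(20, 50))   # 2**1000 modes: hopeless to enumerate
```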

00:44:01 And yeah, I’m very optimistic

00:44:03 that machine learning will get us there.

00:44:05 First of all, I mean, I’ll probably do it

00:44:08 in the introduction,

00:44:09 but you’re one of the great robotics people at MIT.

00:44:12 You’re a professor at MIT.

00:44:14 You teach a lot of amazing courses.

00:44:16 You run a large group

00:44:19 and you have an important history for MIT, I think,

00:44:22 as being a part of the DARPA Robotics Challenge.

00:44:26 Can you maybe first say,

00:44:28 what is the DARPA Robotics Challenge

00:44:30 and then tell your story around it, your journey with it?

00:44:36 Yeah, sure.

00:44:39 So the DARPA Robotics Challenge,

00:44:41 it came on the tails of the DARPA Grand Challenge

00:44:44 and DARPA Urban Challenge,

00:44:45 which were the challenges that brought us,

00:44:49 put a spotlight on self driving cars.

00:44:55 Gill Pratt was at DARPA and pitched a new challenge

00:45:01 that involved disaster response.

00:45:04 It didn’t explicitly require humanoids,

00:45:07 although humanoids came into the picture.

00:45:10 This happened shortly after the Fukushima disaster in Japan

00:45:14 and our challenge was motivated roughly by that

00:45:17 because that was a case where if we had had robots

00:45:21 that were ready to be sent in,

00:45:22 there’s a chance that we could have averted disaster.

00:45:26 And certainly, in the disaster response,

00:45:30 there were times we would have loved

00:45:32 to have sent robots in.

00:45:34 So in practice, what we ended up with was a grand challenge,

00:45:39 a DARPA Robotics Challenge,

00:45:41 where Boston Dynamics was to make humanoid robots.

00:45:48 People like me and the amazing team at MIT

00:45:53 were competing first in a simulation challenge

00:45:56 to try to be one of the ones that wins the right

00:45:59 to work on one of the Boston Dynamics humanoids

00:46:03 in order to compete in the final challenge,

00:46:06 which was a physical challenge.

00:46:08 And at that point, it was already decided,

00:46:11 it would be humanoid robots early on.

00:46:13 There were two tracks.

00:46:15 You could enter as a hardware team

00:46:16 where you brought your own robot,

00:46:18 or you could enter through the virtual robotics challenge

00:46:21 as a software team that would try to win the right

00:46:24 to use one of the Boston Dynamics robots.

00:46:25 Sure, called Atlas.

00:46:27 Atlas.

00:46:28 Humanoid robots.

00:46:29 Yeah, it was a 400 pound marvel,

00:46:31 but a pretty big, scary looking robot.

00:46:35 Expensive too.

00:46:36 Expensive, yeah.

00:46:38 Okay, so I mean, how did you feel

00:46:42 at the prospect of this kind of challenge?

00:46:44 I mean, it seems autonomous vehicles,

00:46:48 yeah, I guess that sounds hard,

00:46:51 but not really from a robotics perspective.

00:46:53 It’s like, didn’t they do it in the 80s

00:46:56 is the kind of feeling I would have,

00:46:58 like when you first look at the problem,

00:47:00 it’s on wheels, but like humanoid robots,

00:47:04 that sounds really hard.

00:47:07 So what are your, psychologically speaking,

00:47:12 what were you feeling, excited, scared?

00:47:15 Why the heck did you get yourself involved

00:47:18 in this kind of messy challenge?

00:47:19 We didn’t really know for sure what we were signing up for

00:47:24 in the sense that you could have something that,

00:47:26 as it was described in the call for participation,

00:47:30 that could have put a huge emphasis on the dynamics

00:47:33 of walking and not falling down

00:47:35 and walking over rough terrain,

00:47:37 or the same description,

00:47:38 because the robot had to go into this disaster area

00:47:40 and turn valves and pick up a drill,

00:47:44 and cut a hole through a wall.

00:47:45 It had to do some interesting things.

00:47:48 The challenge could have really highlighted perception

00:47:51 and autonomous planning,

00:47:54 but it ended up that locomoting over complex terrain

00:48:01 played a pretty big role in the competition.

00:48:03 So…

00:48:05 And the degree of autonomy wasn’t clear.

00:48:08 The degree of autonomy

00:48:09 was always a central part of the discussion.

00:48:11 So what wasn’t clear was

00:48:15 how far we’d be able to get with it.

00:48:17 So the idea was always that you want semi autonomy,

00:48:21 that you want the robot to have enough compute

00:48:24 that you can have a degraded network link to a human.

00:48:27 And so the same way we had degraded networks

00:48:30 at many natural disasters,

00:48:33 you’d send your robot in,

00:48:34 you’d be able to get a few bits back and forth,

00:48:37 but you don’t get to have enough

00:48:38 potentially to fully operate the robot

00:48:42 in every joint of the robot.

00:48:44 So, and then the question was,

00:48:46 and the gamesmanship of the organizers

00:48:48 was to figure out what we’re capable of,

00:48:50 push us as far as we could,

00:48:52 so that it would differentiate the teams

00:48:55 that put more autonomy on the robot

00:48:57 and had a few clicks and just said,

00:48:59 go there, do this, go there, do this,

00:49:00 versus someone who’s picking every footstep

00:49:03 or something like that.

00:49:05 So what were some memories,

00:49:10 painful, triumphant from the experience?

00:49:13 Like what was that journey?

00:49:15 Maybe if you can dig in a little deeper,

00:49:17 maybe even on the technical side, on the team side,

00:49:21 that whole process of,

00:49:24 from the early idea stages to actually competing.

00:49:28 I mean, this was a defining experience for me.

00:49:31 It came at the right time for me in my career.

00:49:33 I had gotten tenure before I was due a sabbatical,

00:49:37 and most people do something relaxing

00:49:39 and restorative for a sabbatical.

00:49:41 So you got tenure before this?

00:49:44 Yeah, yeah, yeah.

00:49:46 It was a good time for me.

00:49:48 We had a bunch of algorithms that we were very happy with.

00:49:50 We wanted to see how far we could push them,

00:49:52 and this was a chance to really test our mettle

00:49:54 to do more proper software engineering.

00:49:56 So the team, we all just worked our butts off.

00:50:01 We were in that lab almost all the time.

00:50:07 Okay, so there were some, of course,

00:50:09 high highs and low lows throughout that.

00:50:12 Anytime you’re not sleeping

00:50:13 and devoting your life to a 400 pound humanoid.

00:50:18 I remember actually one funny moment

00:50:20 where we’re all super tired,

00:50:21 and so Atlas had to walk across cinder blocks.

00:50:24 That was one of the obstacles.

00:50:26 And I remember Atlas was powered down

00:50:28 and hanging limp on its harness,

00:50:31 and the humans were there picking up

00:50:34 and laying the brick down

00:50:35 so that the robot could walk over it.

00:50:36 And I thought, what is wrong with this?

00:50:38 We’ve got a robot just watching us

00:50:41 do all the manual labor

00:50:42 so that it can take its little stroll across the terrain.

00:50:47 But I mean, even the virtual robotics challenge

00:50:52 was super nerve wracking and dramatic.

00:50:54 I remember, so we were using Gazebo as a simulator

00:51:01 on the cloud,

00:51:02 and there was all these interesting challenges.

00:51:03 I think the investment that OSRF,

00:51:08 whatever they were called at that time,

00:51:10 Brian Gerkey’s team at Open Source Robotics,

00:51:14 they were pushing on the capabilities of Gazebo

00:51:16 in order to scale it to the complexity of these challenges.

00:51:20 So, you know, up to the virtual competition.

00:51:23 So the virtual competition was,

00:51:26 you will sign on at a certain time

00:51:28 and we’ll have a network connection

00:51:29 to another machine on the cloud

00:51:32 that is running the simulator of your robot.

00:51:34 And your controller will run on this computer

00:51:38 and the physics will run on the other

00:51:40 and you have to connect.

00:51:43 Now, the physics, they wanted it to run at real time rates

00:51:48 because there was an element of human interaction.

00:51:50 And humans, if you do want to teleop,

00:51:53 it works way better if it’s at frame rate.

00:51:56 Oh, cool.

00:51:57 But it was very hard to simulate

00:51:58 these complex scenes at real time rate.

00:52:03 So right up to like days before the competition,

00:52:06 the simulator wasn’t quite at real time rate.

00:52:11 And that was great for me because my controller

00:52:13 was solving a pretty big optimization problem

00:52:16 and it wasn’t quite at real time rate.

00:52:17 So I was fine.

00:52:18 I was keeping up with the simulator.

00:52:20 We were both running at about 0.7.

00:52:22 And I remember getting this email.

00:52:24 And by the way, the perception folks on our team hated

00:52:28 that they knew that if my controller was too slow,

00:52:31 the robot was gonna fall down.

00:52:32 And no matter how good their perception system was,

00:52:34 if I can’t make my controller fast.

00:52:36 Anyways, we get this email

00:52:37 like three days before the virtual competition.

00:52:40 It’s for all the marbles.

00:52:41 We’re gonna either get a humanoid robot or we’re not.

00:52:44 And we get an email saying,

00:52:45 good news, we made the simulator faster.

00:52:48 It’s now at 1.0.

00:52:50 And I was just like, oh man, what are we gonna do here?

00:52:54 So that came in late at night for me.

00:52:59 A few days ahead.

00:53:00 A few days ahead.

00:53:01 I went over. It happened that Frank Permenter,

00:53:04 who’s very, very sharp,

00:53:06 was a student at the time working on optimization.

00:53:11 He was still in lab.

00:53:13 Frank, we need to make the quadratic programming solver

00:53:16 faster, not like a little faster.

00:53:18 It’s actually, you know, and we wrote a new solver

00:53:22 for that QP together that night.

00:53:28 It was terrifying.

00:53:29 So there’s a really hard optimization problem

00:53:31 that you’re constantly solving.

00:53:34 You didn’t make the optimization problem simpler?

00:53:36 You wrote a new solver?

00:53:38 So, I mean, your observation is almost spot on.

00:53:42 What we did was what everybody,

00:53:44 I mean, people know how to do this,

00:53:45 but we had not yet done this idea of warm starting.

00:53:49 So we are solving a big optimization problem

00:53:51 at every time step.

00:53:52 But if you’re running fast enough,

00:53:54 the optimization problem you’re solving

00:53:55 on the last time step is pretty similar

00:53:57 to the optimization you’re gonna solve with the next.

00:54:00 We of course had told our commercial solver

00:54:02 to use warm starting, but even the interface

00:54:05 to that commercial solver was causing us these delays.

00:54:09 So what we did was we basically wrote,

00:54:12 we called it fast QP at the time.

00:54:15 We wrote a very lightweight, very fast layer,

00:54:18 which would basically check if nearby solutions

00:54:22 to the quadratic program,

00:54:24 which were very easily checked,

00:54:26 could stabilize the robot.

00:54:28 And if they couldn’t, we would fall back to the solver.
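The scheme he describes, reuse the last solution when a cheap check passes and pay for the full solve only on a miss, can be sketched like this. This is not the actual fastQP code; the solver, the feasibility check, and the problem data are simplified stand-ins:

```python
# Warm-start-with-fallback for a QP solved at every control step:
#   min 0.5 * x'Hx + f'x   s.t.   lo <= x <= hi
# Consecutive time steps produce nearly identical QPs, so the
# previous solution is usually still (close to) optimal.

def solve_qp_full(H, f, lo, hi, iters=200):
    """Expensive fallback: projected gradient descent on the box QP."""
    n = len(f)
    x = [0.0] * n
    step = 0.1
    for _ in range(iters):
        for i in range(n):
            g = f[i] + sum(H[i][j] * x[j] for j in range(n))
            x[i] = min(hi[i], max(lo[i], x[i] - step * g))
    return x

def is_good_enough(x, H, f, lo, hi, tol=1e-2):
    """Cheap check: feasible, and a projected gradient step barely moves."""
    n = len(x)
    for i in range(n):
        if not (lo[i] - 1e-9 <= x[i] <= hi[i] + 1e-9):
            return False
        g = f[i] + sum(H[i][j] * x[j] for j in range(n))
        if abs(min(hi[i], max(lo[i], x[i] - g)) - x[i]) > tol:
            return False
    return True

class FastQP:
    """Try the previous time step's solution first."""
    def __init__(self):
        self.x_prev = None

    def solve(self, H, f, lo, hi):
        if self.x_prev is not None and is_good_enough(self.x_prev, H, f, lo, hi):
            return self.x_prev                       # warm-start hit
        self.x_prev = solve_qp_full(H, f, lo, hi)    # slow path
        return self.x_prev

qp = FastQP()
H = [[1.0, 0.0], [0.0, 1.0]]
x1 = qp.solve(H, [-1.0, -1.0], [0.0, 0.0], [2.0, 2.0])      # full solve
x2 = qp.solve(H, [-1.005, -1.005], [0.0, 0.0], [2.0, 2.0])  # nearby QP
print(x2 is x1)  # -> True: the previous solution was reused
```

If the cheap check fails, the class silently falls back to the slow path, which is exactly the dangerous edge being described: a controller that only stays real time when the warm start keeps hitting.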

00:54:30 You couldn’t really test this well, right?

00:54:33 Or like?

00:54:33 I mean, so we always knew that if we fell back to,

00:54:37 if we, it got to the point where if for some reason

00:54:40 things slowed down and we fell back to the original solver,

00:54:42 the robot would actually literally fall down.

00:54:46 So it was a harrowing sort of

00:54:49 ledge we were on.

00:54:51 But I mean, it actually,

00:54:53 like the 400 pound humanoid could come crashing to the ground

00:54:55 if your solver’s not fast enough.

00:54:58 But you know, we had lots of good experiences.

00:55:01 So can I ask you a weird question I get

00:55:06 about idea of hard work?

00:55:09 So actually people, like students of yours

00:55:14 that I’ve interacted with and just,

00:55:17 and robotics people in general,

00:55:19 but they have moments,

00:55:23 at moments have worked harder than most people I know

00:55:28 in terms of, if you look at different disciplines

00:55:30 of how hard people work.

00:55:32 But they’re also like the happiest.

00:55:34 Like, just like, I don’t know.

00:55:37 It’s the same thing with like running.

00:55:39 People that push themselves to like the limit,

00:55:41 they also seem to be like the most like full of life

00:55:44 somehow.

00:55:46 And I often get criticized, like,

00:55:48 you’re not getting enough sleep.

00:55:50 What are you doing to your body?

00:55:52 Blah, blah, blah, like this kind of stuff.

00:55:54 And I usually just kind of respond like,

00:55:58 I’m doing what I love.

00:55:59 I’m passionate about it.

00:56:00 I love it.

00:56:01 I feel like it’s, it’s invigorating.

00:56:04 I actually think, I don’t think the lack of sleep

00:56:07 is what hurts you.

00:56:08 I think what hurts you is stress and lack of doing things

00:56:12 that you’re passionate about.

00:56:13 But in this world, yeah, I mean,

00:56:14 can you comment about why the heck robotics people

00:56:20 are willing to push themselves to that degree?

00:56:26 Is there value in that?

00:56:27 And why are they so happy?

00:56:30 I think, I think you got it right.

00:56:31 I mean, I think the causality is not that we work hard.

00:56:36 And I think other disciplines work very hard too,

00:56:38 but it’s, I don’t think it’s that we work hard

00:56:40 and therefore we are happy.

00:56:43 I think we found something

00:56:44 that we’re truly passionate about.

00:56:48 It makes us very happy.

00:56:49 And then we get a little involved with it

00:56:52 and spend a lot of time on it.

00:56:54 What a luxury to have something

00:56:55 that you wanna spend all your time on, right?

00:56:59 We could talk about this for many hours,

00:57:00 but maybe if we could pick,

00:57:03 is there something on the technical side

00:57:05 on the approach that you took that’s interesting

00:57:08 that turned out to be a terrible failure

00:57:10 or a success that you carry into your work today

00:57:13 about all the different ideas that were involved

00:57:17 in making, whether in the simulation or in the real world,

00:57:23 making this semi autonomous system work?

00:57:25 I mean, it really did teach me something fundamental

00:57:30 about what it’s gonna take to get robustness

00:57:33 out of a system of this complexity.

00:57:35 I would say the DARPA challenge

00:57:37 really was foundational in my thinking.

00:57:41 I think the autonomous driving community thinks about this.

00:57:43 I think lots of people thinking

00:57:45 about safety critical systems

00:57:47 that might have machine learning in the loop

00:57:48 are thinking about these questions.

00:57:50 For me, the DARPA challenge was the moment

00:57:53 where I realized we’ve spent every waking minute

00:57:57 running this robot.

00:57:58 And again, for the physical competition,

00:58:01 days before the competition,

00:58:02 we saw the robot fall down in a way

00:58:04 it had never fallen down before.

00:58:05 I thought, how could we have found that?

00:58:10 We only have one robot, it’s running almost all the time.

00:58:13 We just didn’t have enough hours in the day

00:58:15 to test that robot.

00:58:17 Something has to change, right?

00:58:19 And then I think that, I mean,

00:58:21 I would say that the team that won,

00:58:24 from KAIST, was the team that had two robots

00:58:28 and was able to do not only incredible engineering,

00:58:30 just absolutely top rate engineering,

00:58:33 but also they were able to test at a rate

00:58:36 and discipline that we didn’t keep up with.

00:58:39 What does testing look like?

00:58:41 What are we talking about here?

00:58:42 Like, what’s a loop of tests?

00:58:45 Like from start to finish, what is a loop of testing?

00:58:48 Yeah, I mean, I think there’s a whole philosophy to testing.

00:58:51 There are the unit tests, and you can do that on hardware,

00:58:54 you can do that in a small piece of code.

00:58:56 You write one function, you should write a test

00:58:58 that checks that function’s input and outputs.

00:59:00 You should also write an integration test

00:59:02 at the other extreme of running the whole system together,

00:59:05 where they try to turn on all of the different functions

00:59:09 that you think are correct.

00:59:11 It’s much harder to write the specifications

00:59:13 for a system level test,

00:59:14 especially if that system is as complicated

00:59:17 as a humanoid robot.

00:59:18 But the philosophy is sort of the same.
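
The unit test idea described here — write one function, then a test that checks that function's inputs and outputs — can be sketched in Python. The balance controller and its gains below are hypothetical stand-ins for illustration, not anything from the actual robot software:

```python
# A minimal sketch of the unit-testing philosophy described above.
# The controller function and its gains are invented for illustration.

def balance_torque(tilt_angle, tilt_rate, kp=120.0, kd=15.0):
    """PD controller: torque pushing the tilt angle back toward zero."""
    return -kp * tilt_angle - kd * tilt_rate

def test_balance_torque():
    # Upright and still: no corrective torque needed.
    assert balance_torque(0.0, 0.0) == 0.0
    # Tipping forward (positive angle): torque should be restoring (negative).
    assert balance_torque(0.1, 0.0) < 0.0
    # Tipping backward: torque should be positive.
    assert balance_torque(-0.1, 0.0) > 0.0

test_balance_torque()
print("unit test passed")
```

An integration test, by contrast, would launch the whole stack together and check system-level outcomes, which, as noted, is much harder to specify.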

00:59:21 On the real robot, it’s no different,

00:59:24 but on a real robot,

00:59:26 it’s impossible to run the same experiment twice.

00:59:28 So if you see a failure,

00:59:32 you hope you caught something in the logs

00:59:34 that tell you what happened,

00:59:35 but you’d probably never be able to run

00:59:36 exactly that experiment again.

00:59:39 And right now, I think our philosophy is just,

00:59:45 basically Monte Carlo estimation,

00:59:47 is just run as many experiments as we can,

00:59:50 maybe try to set up the environment

00:59:53 to make the things we are worried about happen

00:59:58 as often as possible.

00:59:59 But really we’re relying on somewhat random search

01:00:02 in order to test.

01:00:04 Maybe that’s all we’ll ever be able to,

01:00:05 but I think, you know,

01:00:07 cause there’s an argument that the things that’ll get you

01:00:10 are the things that are really nuanced in the world.

01:00:14 And there’d be very hard to, for instance,

01:00:15 put back in a simulation.
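
The Monte Carlo approach described here — run as many randomized trials as possible, biased toward the situations you're worried about — can be sketched as follows. The `simulate` function and the 50 N threshold are toy stand-ins for a real physics simulation:

```python
# A sketch of Monte Carlo testing: sample random disturbances, count failures.
# The simulate() model and its threshold are invented for illustration.
import random

random.seed(0)

def simulate(push_force):
    # Toy stand-in for a full robot simulation: this hypothetical robot
    # recovers from any push under 50 N and falls otherwise.
    return push_force < 50.0

def monte_carlo_test(trials=10000):
    failures = 0
    for _ in range(trials):
        # Bias the sampling toward large pushes, the regime we worry about.
        push = random.uniform(30.0, 70.0)
        if not simulate(push):
            failures += 1
    return failures / trials

print(f"estimated failure rate: {monte_carlo_test():.3f}")
```

Biasing the sampled environment finds failures faster than uniform random testing, but it can still miss the nuanced edge cases that only the real world produces.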

01:00:16 Yeah, I guess the edge cases.

01:00:19 What was the hardest thing?

01:00:21 Like, so you said walking over rough terrain,

01:00:24 like just taking footsteps.

01:00:27 I mean, people, it’s so dramatic and painful

01:00:31 in a certain kind of way to watch these videos

01:00:33 from the DRC of robots falling.

01:00:37 Yep.

01:00:38 It’s just so heartbreaking.

01:00:39 I don’t know.

01:00:40 Maybe it’s because for me at least,

01:00:42 we anthropomorphize the robot.

01:00:45 Of course, it’s also funny for some reason,

01:00:48 like humans falling is funny for, I don’t,

01:00:51 it’s some dark reason.

01:00:53 I’m not sure why it is so,

01:00:55 but it’s also like tragic and painful.

01:00:57 And so speaking of which, I mean,

01:01:00 what made the robots fall and fail in your view?

01:01:05 So I can tell you exactly what happened on our,

01:01:06 we, I contributed one of those.

01:01:08 Our team contributed one of those spectacular falls.

01:01:10 Every one of those falls has a complicated story.

01:01:15 I mean, at one time,

01:01:16 the power effectively went out on the robot

01:01:20 because it had been sitting at the door

01:01:21 waiting for a green light to be able to proceed

01:01:26 and its batteries died, you know,

01:01:26 and therefore it just fell backwards

01:01:28 and smashed its head against the ground.

01:01:29 And it was hilarious,

01:01:30 but it wasn’t because of bad software, right?

01:01:34 But for ours, so the hardest part of the challenge,

01:01:37 the hardest task in my view was getting out of the Polaris.

01:01:40 It was actually relatively easy to drive the Polaris.

01:01:43 Can you tell the story?

01:01:44 Sorry to interrupt.

01:01:45 The story of the car.

01:01:50 People should watch this video.

01:01:51 I mean, the thing you’ve come up with is just brilliant,

01:01:53 but anyway, sorry, what’s…

01:01:55 Yeah, we kind of joke.

01:01:56 We call it the big robot, little car problem

01:01:59 because somehow the race organizers decided

01:02:03 to give us a 400 pound humanoid.

01:02:05 And then they also provided the vehicle,

01:02:07 which was a little Polaris.

01:02:08 And the robot didn’t really fit in the car.

01:02:11 So you couldn’t drive the car with your feet

01:02:14 under the steering column.

01:02:15 We actually had to straddle the main column of the,

01:02:21 and have basically one foot in the passenger seat,

01:02:23 one foot in the driver’s seat,

01:02:25 and then drive with our left hand.

01:02:28 But the hard part was we had to then park the car,

01:02:31 get out of the car.

01:02:33 It didn’t have a door, that was okay.

01:02:34 But it’s just getting up from crouched, from sitting,

01:02:38 when you’re in this very constrained environment.

01:02:41 First of all, I remember after watching those videos,

01:02:44 I was much more cognizant of how hard it is for me

01:02:47 to get in and out of the car,

01:02:49 and out of the car, especially.

01:02:51 It’s actually a really difficult control problem.

01:02:54 Yeah.

01:02:55 I’m very cognizant of it when I’m like injured

01:02:58 for whatever reason.

01:02:59 Oh, that’s really hard.

01:03:00 Yeah.

01:03:01 So how did you approach this problem?

01:03:03 So we had, you think of NASA’s operations,

01:03:08 and they have these checklists,

01:03:09 prelaunched checklists and the like.

01:03:11 We weren’t far off from that.

01:03:12 We had this big checklist.

01:03:13 And on the first day of the competition,

01:03:16 we were running down our checklist.

01:03:17 And one of the things we had to do,

01:03:19 we had to turn off the controller,

01:03:21 the piece of software that was running

01:03:23 that would drive the left foot of the robot

01:03:25 in order to accelerate on the gas.

01:03:28 And then we turned on our balancing controller.

01:03:30 And the nerves, jitters of the first day of the competition,

01:03:34 someone forgot to check that box

01:03:35 and turn that controller off.

01:03:37 So we used a lot of motion planning

01:03:40 to figure out a sort of configuration of the robot

01:03:45 that we could get up and over.

01:03:47 We relied heavily on our balancing controller.

01:03:50 And basically, when the robot was in one

01:03:53 of its most precarious sort of configurations,

01:03:57 trying to sneak its big leg out of the side,

01:04:01 the other controller that thought it was still driving

01:04:05 told its left foot to go like this.

01:04:06 And that wasn’t good.

01:04:11 But it turned disastrous for us

01:04:13 because what happened was a little bit of push here.

01:04:16 Actually, we have videos of us running into the robot

01:04:21 with a 10 foot pole and it kind of will recover.

01:04:24 But this is a case where there’s no space to recover.

01:04:27 So a lot of our secondary balancing mechanisms

01:04:30 about like take a step to recover,

01:04:32 they were all disabled because we were in the car

01:04:33 and there was no place to step.

01:04:35 So we were relying on our just lowest level reflexes.

01:04:38 And even then, I think just hitting the foot on the seat,

01:04:42 on the floor, we probably could have recovered from it.

01:04:44 But the thing that was bad that happened

01:04:46 is when we did that and we jostled a little bit,

01:04:49 the tailbone of our robot was only a little off the seat,

01:04:53 it hit the seat.

01:04:55 And the other foot came off the ground just a little bit.

01:04:58 And nothing in our plans had ever told us what to do

01:05:02 if your butt’s on the seat and your feet are in the air.

01:05:05 Feet in the air.

01:05:06 And then the thing is once you get off the script,

01:05:10 things can go very wrong

01:05:11 because even our state estimation,

01:05:12 our system that was trying to collect all the data

01:05:15 from the sensors and understand

01:05:16 what’s happening with the robot,

01:05:18 it didn’t know about this situation.

01:05:20 So it was predicting things that were just wrong.

01:05:22 And then we did a violent shake and fell,

01:05:26 face first, out of the vehicle.

01:05:29 But like into the destination.

01:05:32 That’s true, we fell in, we got our point for egress.

01:05:36 But so is there any hope for, that’s interesting,

01:05:39 is there any hope for Atlas to be able to do something

01:05:43 when it’s just on its butt and feet in the air?

01:05:46 Absolutely.

01:05:47 So you can, what do you?

01:05:48 No, so that is one of the big challenges.

01:05:50 And I think it’s still true, you know,

01:05:53 Boston Dynamics and ANYmal and there's this incredible work

01:05:59 on legged robots happening around the world.

01:06:04 Most of them still are very good at the case

01:06:07 where you’re making contact with the world at your feet.

01:06:10 And they have typically point feet relatively,

01:06:12 they have balls on their feet, for instance.

01:06:14 If those robots get in a situation

01:06:16 where the elbow hits the wall or something like this,

01:06:19 that’s a pretty different situation.

01:06:21 Now they have layers of mechanisms that will make,

01:06:24 I think the more mature solutions have ways

01:06:27 in which the controller won’t do stupid things.

01:06:31 But a human, for instance, is able to leverage

01:06:34 incidental contact in order to accomplish a goal.

01:06:36 In fact, I might, if you push me,

01:06:37 I might actually put my hand out

01:06:39 and make a brand new contact.

01:06:42 The feet of the robot are doing this on quadrupeds,

01:06:44 but we mostly in robotics are afraid of contact

01:06:49 on the rest of our body, which is crazy.

01:06:53 There’s this whole field of motion planning,

01:06:56 collision free motion planning.

01:06:58 And we write very complex algorithms

01:06:59 so that the robot can dance around

01:07:01 and make sure it doesn’t touch the world.

01:07:05 So people are just afraid of contact

01:07:07 because contact is seen as difficult.

01:07:09 It’s still a difficult control problem and sensing problem.

01:07:13 Now you’re a serious person, I’m a little bit of an idiot

01:07:21 and I’m going to ask you some dumb questions.

01:07:24 So I do martial arts.

01:07:27 So like jiu jitsu, I wrestled my whole life.

01:07:30 So let me ask the question, like whenever people learn

01:07:35 that I do any kind of AI or like I mentioned robots

01:07:38 and things like that, they say,

01:07:40 when are we going to have robots that can win

01:07:45 in a wrestling match or in a fight against a human?

01:07:49 So we just mentioned sitting on your butt,

01:07:52 if you’re in the air, that’s a common position.

01:07:53 Jiu jitsu, when you’re on the ground,

01:07:55 you’re a down opponent.

01:07:59 Like how difficult do you think is the problem?

01:08:03 And when will we have a robot that can defeat a human

01:08:06 in a wrestling match?

01:08:08 And we’re talking about a lot, like, I don’t know

01:08:11 if you’re familiar with wrestling, but essentially.

01:08:15 Not very.

01:08:16 It’s basically the art of contact.

01:08:19 It’s like, it’s because you’re picking contact points

01:08:24 and then using like leverage like to off balance

01:08:29 to trick people, like you make them feel

01:08:33 like you’re doing one thing

01:08:35 and then they change their balance

01:08:38 and then you switch what you’re doing

01:08:41 and then results in a throw or whatever.

01:08:44 So like, it’s basically the art of multiple contacts.

01:08:48 So.

01:08:49 Awesome, that’s a nice description of it.

01:08:50 So there’s also an opponent in there, right?

01:08:53 So if.

01:08:54 Very dynamic.

01:08:55 Right, if you are wrestling a human

01:08:58 and are in a game theoretic situation with a human,

01:09:02 that’s still hard, but just to speak to the, you know,

01:09:08 quickly reasoning about contact part of it, for instance.

01:09:11 Yeah, maybe even throwing the game theory out of it,

01:09:13 almost like, yeah, almost like a non dynamic opponent.

01:09:17 Right, there’s reasons to be optimistic,

01:09:20 but I think our best understanding of those problems

01:09:22 are still pretty hard.

01:09:24 I have been increasingly focused on manipulation,

01:09:29 partly where that’s a case where the contact

01:09:31 has to be much more rich.

01:09:35 And there are some really impressive examples

01:09:38 of deep learning policies, controllers

01:09:41 that can appear to do good things through contact.

01:09:47 We’ve even got new examples of, you know,

01:09:51 deep learning models of predicting what’s gonna happen

01:09:53 to objects as they go through contact.

01:09:56 But I think the challenge you just offered there

01:09:59 still eludes us, right?

01:10:01 The ability to make a decision

01:10:03 based on those models quickly.

01:10:07 You know, I have to think though, it’s hard for humans too,

01:10:10 when you get that complicated.

01:10:11 I think probably you had maybe a slow motion version

01:10:16 of where you learned the basic skills

01:10:17 and you’ve probably gotten better at it

01:10:20 and there’s much more subtle to you.

01:10:24 But it might still be hard to actually, you know,

01:10:27 really on the fly take a, you know, model of your humanoid

01:10:32 and figure out how to plan the optimal sequence.

01:10:35 That might be a problem we never solve.

01:10:36 Well, the, I mean, one of the most amazing things to me

01:10:40 about the, we can talk about martial arts.

01:10:43 We could also talk about dancing.

01:10:45 Doesn’t really matter.

01:10:46 To me, I think it's the most interesting study

01:10:50 of contact.

01:10:51 It’s not even the dynamic element of it.

01:10:53 It’s the, like when you get good at it, it’s so effortless.

01:10:58 Like I can just, I’m very cognizant

01:11:00 of the entirety of the learning process

01:11:03 being essentially like learning how to move my body

01:11:07 in a way that I could throw very large weights

01:11:12 around effortlessly, like, and I can feel the learning.

01:11:18 Like I’m a huge believer in drilling of techniques

01:11:21 and you can just like feel your, I don’t,

01:11:23 you’re not feeling, you’re feeling, sorry,

01:11:26 you’re learning it intellectually a little bit,

01:11:29 but a lot of it is the body learning it somehow,

01:11:32 like instinctually and whatever that learning is,

01:11:36 that’s really, I’m not even sure if that’s equivalent

01:11:40 to like a deep learning, learning a controller.

01:11:44 I think it’s something more,

01:11:46 it feels like there’s a lot of distributed learning

01:11:49 going on.

01:11:50 Yeah, I think there’s hierarchy and composition

01:11:56 probably in the systems that we don’t capture very well yet.

01:12:00 You have layers of control systems.

01:12:02 You have reflexes at the bottom layer

01:12:03 and you have a system that’s capable

01:12:07 of planning a vacation to some distant country,

01:12:11 which is probably, you probably don’t have a controller,

01:12:14 a policy for every possible destination you’ll ever pick.

01:12:18 Right?

01:12:20 But there’s something magical in the in between

01:12:23 and how do you go from these low level feedback loops

01:12:26 to something that feels like a pretty complex

01:12:30 set of outcomes.
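
The layered picture sketched here — fast reflexes at the bottom, slower deliberate planning on top — can be illustrated with a toy example. The dynamics, gains, and waypoint scheme are all invented for illustration:

```python
# A toy sketch of hierarchical control: a slow high-level planner picks
# coarse waypoints, and a fast low-level feedback loop (a "reflex") tracks
# them. All numbers here are made up.

def high_level_plan(start, goal, steps=5):
    """Coarse plan: evenly spaced waypoints from start to goal."""
    return [start + (goal - start) * i / steps for i in range(1, steps + 1)]

def low_level_track(x, target, gain=0.5, iters=20):
    """Fast proportional feedback loop pulling state x toward the target."""
    for _ in range(iters):
        x += gain * (target - x)
    return x

state = 0.0
for waypoint in high_level_plan(0.0, 10.0):
    state = low_level_track(state, waypoint)

print(f"final state: {state:.2f}")  # ends very close to the goal of 10.0
```

The planner never reasons about the fast dynamics, and the feedback loop never sees the overall goal; the composition of the layers is what produces the complex-looking outcome.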

01:12:32 You know, my guess is, I think there’s evidence

01:12:34 that you can plan at some of these levels, right?

01:12:37 So Josh Tenenbaum just showed it in his talk the other day.

01:12:41 He’s got a game he likes to talk about.

01:12:43 I think he calls it the pick three game or something,

01:12:46 where he puts a bunch of clutter down in front of a person

01:12:50 and he says, okay, pick three objects.

01:12:52 And it might be a telephone or a shoe

01:12:55 or a Kleenex box or whatever.

01:12:59 And apparently you pick three items and then you pick,

01:13:01 he says, okay, pick the first one up with your right hand,

01:13:04 the second one up with your left hand.

01:13:06 Now using those objects, now as tools,

01:13:08 pick up the third object.

01:13:11 Right, so that’s down at the level of physics

01:13:15 and mechanics and contact mechanics

01:13:17 that I think we do learning or we do have policies for,

01:13:21 we do control for, almost feedback,

01:13:24 but somehow we’re able to still,

01:13:26 I mean, I’ve never picked up a telephone

01:13:28 with a shoe and a water bottle before.

01:13:30 And somehow, and it takes me a little longer to do that

01:13:33 the first time, but most of the time

01:13:35 we can sort of figure that out.

01:13:37 So yeah, I think the amazing thing is this ability

01:13:41 to be flexible with our models,

01:11:44 plan when we need to, use our well oiled controllers

01:13:48 when we don’t, when we’re in familiar territory.

01:13:53 Having models, I think the other thing you just said

01:13:55 was something about, I think your awareness

01:13:58 of what’s happening is even changing

01:13:59 as you improve your expertise, right?

01:14:02 So maybe you have a very approximate model

01:14:04 of the mechanics to begin with.

01:14:06 And as you gain expertise,

01:14:09 you get a more refined version of that model.

01:14:11 You’re aware of muscles or balance components

01:14:17 that you just weren’t even aware of before.

01:14:19 So how do you scaffold that?

01:14:21 Yeah, plus the fear of injury,

01:14:24 the ambition of goals, of excelling,

01:14:28 and fear of mortality.

01:14:32 Let’s see, what else is in there?

01:14:33 As the motivations, overinflated ego in the beginning,

01:14:38 and then a crash of confidence in the middle.

01:14:42 All of those seem to be essential for the learning process.

01:14:46 And if all that’s good,

01:14:48 then you’re probably optimizing energy efficiency.

01:14:50 Yeah, right, so we have to get that right.

01:14:53 So there was this idea that you would have robots

01:14:58 play soccer better than human players by 2050.

01:15:03 That was the goal.

01:15:05 Basically, it was the goal to beat a world champion team,

01:15:10 to beat, like, a World Cup level team.

01:15:13 So are we gonna see that first?

01:15:15 Or a robot, if you’re familiar,

01:15:19 there’s an organization called UFC for mixed martial arts.

01:15:23 Are we gonna see a World Cup championship soccer team

01:15:27 that have robots, or a UFC champion mixed martial artist

01:15:32 as a robot?

01:15:33 I mean, it’s very hard to say one thing is harder,

01:15:37 some problem is harder than the other.

01:15:38 What probably matters is who started the organization that,

01:15:44 I mean, I think RoboCup has a pretty serious following,

01:15:47 and there is a history now of people playing that game,

01:15:50 learning about that game, building robots to play that game,

01:15:53 building increasingly more human robots.

01:15:55 It’s got momentum.

01:15:57 So if you want to have mixed martial arts compete,

01:16:00 you better start your organization now, right?

01:16:05 I think almost independent of which problem

01:16:07 is technically harder,

01:16:08 because they’re both hard and they’re both different.

01:16:11 That’s a good point.

01:16:12 I mean, those videos are just hilarious,

01:16:14 like especially the humanoid robots

01:16:17 trying to play soccer.

01:16:21 I mean, they’re kind of terrible right now.

01:16:23 I mean, I guess there is robo sumo wrestling.

01:16:26 There’s like the robo one competitions,

01:16:28 where they do have these robots that go on the table

01:16:31 and basically fight.

01:16:32 So maybe I’m wrong, maybe.

01:16:33 First of all, do you have a year in mind for RoboCup,

01:16:37 just from a robotics perspective?

01:16:39 Seems like a super exciting possibility

01:16:42 that like in the physical space,

01:16:46 this is what’s interesting.

01:16:47 I think the world is captivated.

01:16:50 I think it’s really exciting.

01:16:52 It inspires just a huge number of people

01:16:56 when a machine beats a human at a game

01:17:01 that humans are really damn good at.

01:17:03 So you’re talking about chess and go,

01:17:05 but that’s in the world of digital.

01:17:09 I don’t think machines have beat humans

01:17:13 at a game in the physical space yet,

01:17:16 but that would be just.

01:17:17 You have to make the rules very carefully, right?

01:17:20 I mean, if Atlas kicked me in the shins, I’m down

01:17:22 and game over.

01:17:25 So it’s very subtle on what’s fair.

01:17:31 I think the fighting one is a weird one.

01:17:33 Yeah, because you’re talking about a machine

01:17:35 that’s much stronger than you.

01:17:36 But yeah, in terms of soccer, basketball, all those kinds.

01:17:39 Even soccer, right?

01:17:40 I mean, as soon as there’s contact or whatever,

01:17:43 and there are some things that the robot will do better.

01:17:46 I think if you really set yourself up to try to see

01:17:51 could robots win the game of soccer

01:17:53 as the rules were written, the right thing

01:17:56 for the robot to do is to play very differently

01:17:58 than a human would play.

01:17:59 You’re not gonna get the perfect soccer player robot.

01:18:04 You’re gonna get something that exploits the rules,

01:18:07 exploits its super actuators, its super low bandwidth

01:18:13 feedback loops or whatever, and it’s gonna play the game

01:18:15 differently than you want it to play.

01:18:17 And I bet there’s ways, I bet there’s loopholes, right?

01:18:21 We saw that in the DARPA challenge that it’s very hard

01:18:27 to write a set of rules that someone can’t find

01:18:30 a way to exploit.

01:18:32 Let me ask another ridiculous question.

01:18:35 I think this might be the last ridiculous question,

01:18:37 but I doubt it.

01:18:39 I aspire to ask as many ridiculous questions

01:18:44 of a brilliant MIT professor.

01:18:48 Okay, I don’t know if you’ve seen the black mirror.

01:18:53 It’s funny, I never watched the episode.

01:18:56 I know when it happened though, because I gave a talk

01:19:00 to some MIT faculty one day on an unassuming Monday

01:19:05 or whatever I was telling him about the state of robotics.

01:19:08 And I showed some video from Boston Dynamics

01:19:10 of the quadruped Spot at the time.

01:19:13 It was the early version of Spot.

01:19:15 And there was a look of horror that went across the room.

01:19:19 And I said, I’ve shown videos like this a lot of times,

01:19:23 what happened?

01:19:24 And it turns out that this video had gone,

01:19:26 this Black Mirror episode had changed

01:19:28 the way people watched the videos I was putting out.

01:19:33 The way they see these kinds of robots.

01:19:34 So I talked to so many people who are just terrified

01:19:37 because of that episode probably of these kinds of robots.

01:19:41 I almost wanna say that they almost enjoy being terrified.

01:19:44 I don’t even know what it is about human psychology

01:19:47 that kind of imagine doomsday,

01:19:49 the destruction of the universe or our society

01:19:52 and kind of like enjoy being afraid.

01:19:57 I don’t wanna simplify it, but it feels like

01:19:59 they talk about it so often.

01:20:01 It almost, there does seem to be an addictive quality to it.

01:20:06 I talked to a guy, a guy named Joe Rogan,

01:20:09 who’s kind of the flag bearer

01:20:11 for being terrified at these robots.

01:20:14 So I have two questions.

01:20:17 One, do you have an understanding

01:20:18 of why people are afraid of robots?

01:20:21 And the second question is in black mirror,

01:20:24 just to tell you the episode,

01:20:26 I don’t even remember it that much anymore,

01:20:28 but these robots, I think they can shoot

01:20:31 like a pellet or something.

01:20:32 They basically have, it’s basically a spot with a gun.

01:20:36 And how far are we away from having robots

01:20:41 that go rogue like that?

01:20:44 Basically spot that goes rogue for some reason

01:20:48 and somehow finds a gun.

01:20:51 Right, so, I mean, I’m not a psychologist.

01:20:56 I think, I don’t know exactly why

01:20:59 people react the way they do.

01:21:01 I think we have to be careful about the way robots influence

01:21:06 our society and the like.

01:21:07 I think that’s something, that’s a responsibility

01:21:09 that roboticists need to embrace.

01:21:13 I don’t think robots are gonna come after me

01:21:15 with a kitchen knife or a pellet gun right away.

01:21:18 And I mean, if they were programmed in such a way,

01:21:21 but I used to joke with Atlas that all I had to do

01:21:25 was run for five minutes and its battery would run out.

01:21:28 But actually they’ve got to be careful

01:21:30 and actually they’ve got a very big battery

01:21:32 in there by the end.

01:21:33 So it was over an hour.

01:21:37 I think the fear is a bit cultural though.

01:21:39 Cause I mean, you notice that, like, I think in my age,

01:21:45 in the US, we grew up watching Terminator, right?

01:21:48 If I had grown up at the same time in Japan,

01:21:50 I probably would have been watching Astro Boy.

01:21:52 And there’s a very different reaction to robots

01:21:55 in different countries, right?

01:21:57 So I don’t know if it’s a human innate fear of metal marvels

01:22:02 or if it’s something that we’ve done to ourselves

01:22:06 with our sci fi.

01:22:09 Yeah, the stories we tell ourselves through movies,

01:22:12 through just through popular media.

01:22:16 But if I were to tell, you know, if you were my therapist

01:22:21 and I said, I’m really terrified that we’re going

01:22:24 to have these robots very soon that will hurt us.

01:22:30 Like, how do you approach making me feel better?

01:22:36 Like, why shouldn’t people be afraid?

01:22:39 There’s a, I think there’s a video

01:22:41 that went viral recently.

01:22:44 Everything, everything with Spot and Boston Dynamics

01:22:46 goes viral in general.

01:22:48 But usually it’s like really cool stuff.

01:22:50 Like they’re doing flips and stuff

01:22:51 or like sad stuff, the Atlas being hit with a broomstick

01:22:56 or something like that.

01:22:57 But there’s a video where I think one of the new productions

01:23:02 bought robots, which are awesome.

01:23:04 It was like patrolling somewhere in like in some country.

01:23:08 And like people immediately were like saying like,

01:23:11 this is like the dystopian future,

01:23:14 like the surveillance state.

01:23:16 For some reason, like you can just have a camera,

01:23:18 like something about Spot being able to walk on four feet

01:23:23 like really terrified people.

01:23:25 So like, what do you say to those people?

01:23:31 I think there is a legitimate fear there

01:23:33 because so much of our future is uncertain.

01:23:37 But at the same time, technically speaking,

01:23:40 it seems like we’re not there yet.

01:23:41 So what do you say?

01:23:42 I mean, I think technology is complicated.

01:23:48 It can be used in many ways.

01:23:49 I think there are purely software attacks

01:23:56 that somebody could use to do great damage.

01:23:59 Maybe they have already, you know,

01:24:01 I think wheeled robots could be used in bad ways too.

01:24:08 Drones.

01:24:09 Drones, right, I don’t think that, let’s see.

01:24:16 I don’t want to be building technology

01:24:19 just because I’m compelled to build technology

01:24:21 and I don’t think about it.

01:24:23 But I would consider myself a technological optimist,

01:24:27 I guess, in the sense that I think we should continue

01:24:32 to create and evolve and our world will change.

01:26:37 And we will introduce new challenges,

01:24:40 we’ll screw something up maybe,

01:24:42 but I think also we’ll invent ourselves

01:24:46 out of those challenges and life will go on.

01:24:49 So it’s interesting because you didn’t mention

01:24:51 like this is technically too hard.

01:24:54 I don’t think robots are, I think people attribute

01:24:57 a robot that looks like an animal

01:24:59 as maybe having a level of self awareness

01:25:02 or consciousness or something that they don’t have yet.

01:25:05 Right, so it’s not, I think our ability

01:25:09 to anthropomorphize those robots is probably,

01:25:13 we’re assuming that they have a level of intelligence

01:25:16 that they don’t yet have.

01:25:17 And that might be part of the fear.

01:25:20 So in that sense, it’s too hard.

01:25:22 But, you know, there are many scary things in the world.

01:25:25 Right, so I think we’re right to ask those questions.

01:25:29 We’re right to think about the implications of our work.

01:25:33 Right, in the short term as we’re working on it for sure,

01:25:39 is there something long term that scares you

01:25:43 about our future with AI and robots?

01:25:47 A lot of folks from Elon Musk to Sam Harris

01:25:52 to a lot of folks talk about the existential threats

01:25:56 about artificial intelligence.

01:25:58 Oftentimes, robots kind of inspire that the most

01:26:03 because of the anthropomorphism.

01:26:05 Do you have any fears?

01:26:07 It’s an important question.

01:26:12 I actually, I think I like Rod Brooks's answer

01:26:14 maybe the best on this, I think.

01:26:17 And it’s not the only answer he’s given over the years,

01:26:19 but maybe one of my favorites is he says,

01:26:24 it’s not gonna be, he’s got a book,

01:26:25 Flesh and Machines, I believe, it’s not gonna be

01:26:29 the robots versus the people,

01:26:31 we’re all gonna be robot people.

01:26:34 Because, you know, we already have smartphones,

01:26:38 some of us have serious technology implanted

01:26:41 in our bodies already, whether we have a hearing aid

01:26:43 or a pacemaker or anything like this,

01:26:47 people with amputations might have prosthetics.

01:26:50 And that’s a trend I think that is likely to continue.

01:26:57 I mean, this is now wild speculation.

01:27:01 But I mean, when do we get to cognitive implants

01:27:05 and the like, and.

01:27:06 Yeah, with Neuralink, brain computer interfaces,

01:27:09 that’s interesting.

01:27:10 So there’s a dance between humans and robots

01:27:12 that’s going to be, it’s going to be impossible

01:27:17 to be scared of the other out there, the robot,

01:27:23 because the robot will be part of us, essentially.

01:27:26 It’d be so intricately sort of part of our society that.

01:27:30 Yeah, and it might not even be implanted part of us,

01:27:33 but just, it’s so much a part of our, yeah, our society.

01:27:37 So in that sense, the smartphone is already the robot

01:27:39 we should be afraid of, yeah.

01:27:41 I mean, yeah, and all the usual fears arise

01:27:45 of the misinformation, the manipulation,

01:27:51 all those kinds of things that,

01:27:56 the problems are all the same.

01:27:57 They’re human problems, essentially, it feels like.

01:28:00 Yeah, I mean, I think the way we interact

01:28:03 with each other online is changing the value we put on,

01:28:07 you know, personal interaction.

01:28:08 And that’s a crazy big change that’s going to happen

01:28:11 and rip through our, has already been ripping

01:28:13 through our society, right?

01:28:14 And that has implications that are massive.

01:28:18 I don’t know if they should be scared of it

01:28:19 or go with the flow, but I don’t see, you know,

01:28:24 some battle lines between humans and robots

01:28:26 being the first thing to worry about.

01:28:29 I mean, I do want to just, as a kind of comment,

01:28:33 maybe you can comment about your just feelings

01:28:35 about Boston Dynamics in general, but you know,

01:28:38 I love science, I love engineering,

01:28:40 I think there’s so many beautiful ideas in it.

01:28:42 And when I look at Boston Dynamics

01:28:45 or legged robots in general,

01:28:47 I think they inspire people, curiosity and feelings

01:28:54 in general, excitement about engineering

01:28:57 more than almost anything else in popular culture.

01:29:00 And I think that’s such an exciting,

01:29:03 like responsibility and possibility for robotics.

01:29:06 And Boston Dynamics is riding that wave pretty damn well.

01:29:10 Like they found it, they’ve discovered that hunger

01:29:13 and curiosity in the people and they’re doing magic with it.

01:29:17 I don’t care if the, I mean, I guess is that their company,

01:29:19 they have to make money, right?

01:29:21 But they’re already doing incredible work

01:29:24 and inspiring the world about technology.

01:29:26 I mean, do you have thoughts about Boston Dynamics

01:29:30 and maybe others, your own work in robotics

01:29:34 and inspiring the world in that way?

01:29:36 I completely agree, I think Boston Dynamics

01:29:40 is absolutely awesome.

01:29:42 I think I show my kids those videos, you know,

01:29:46 and the best thing that happens is sometimes

01:29:48 they’ve already seen them, you know, right?

01:29:50 I think, I just think it’s a pinnacle of success

01:29:55 in robotics that is just one of the best things

01:29:58 that’s happened, absolutely completely agree.

01:30:01 One of the heartbreaking things to me is how many

01:30:06 robotics companies fail, how hard it is to make money

01:30:11 with a robotics company.

01:30:13 iRobot went through hell just to arrive

01:30:17 at the Roomba, to figure out one product.

01:30:19 And then there’s so many home robotics companies

01:30:23 like Jibo and Anki. Anki, the cutest toy that's a great robot

01:30:32 I thought went down, I’m forgetting a bunch of them,

01:30:36 but a bunch of robotics companies fail,

01:30:37 Rod’s company, Rethink Robotics.

01:30:42 Like, do you have anything hopeful to say

01:30:47 about the possibility of making money with robots?

01:30:50 Oh, I think you can’t just look at the failures.

01:30:54 I mean, Boston Dynamics is a success.

01:30:55 There’s lots of companies that are still doing amazingly

01:30:58 good work in robotics.

01:31:01 I mean, this is the capitalist ecology or something, right?

01:31:05 I think you have many companies, you have many startups

01:31:07 and they push each other forward and many of them fail

01:31:11 and some of them get through and that’s sort of

01:31:13 the natural way of those things.

01:31:17 I don’t know that robotics is really that much worse.

01:31:20 I feel the pain that you feel too.

01:31:22 Every time I read one of these, sometimes it’s friends

01:31:26 and I definitely wish it went better or went differently.

01:31:33 But I think it’s healthy and good to have bursts of ideas,

01:31:38 bursts of activities, ideas, if they are really aggressive,

01:31:41 they should fail sometimes.

01:31:45 Certainly that’s the research mantra, right?

01:31:46 If you’re succeeding at every problem you attempt,

01:31:50 then you’re not choosing aggressively enough.

01:31:53 Is it exciting to you, the new spot?

01:31:55 Oh, it’s so good.

01:31:57 When are you getting them as a pet or it?

01:32:00 Yeah, I mean, I have to dig up 75K right now.

01:32:03 I mean, it’s so cool that there’s a price tag,

01:32:05 you can go and then actually buy it.

01:32:08 I have a Skydio R1, love it.

01:32:11 So no, I would absolutely be a customer.

01:32:18 I wonder what your kids would think about it.

01:32:20 I actually, Zach from Boston Dynamics let my kid drive

01:32:25 in one of their demos one time.

01:32:27 And that was just so good, so good.

01:32:31 And again, I’ll forever be grateful for that.

01:32:34 And there’s something magical about the anthropomorphization

01:32:37 of that arm, it adds another level of human connection.

01:32:42 I’m not sure we understand from a control aspect,

01:32:47 the value of anthropomorphization.

01:32:51 I think that’s an understudied

01:32:53 and under-understood engineering problem.

01:32:57 Psychologists have been studying it.

01:33:00 I think manipulating our mind

01:33:02 to believe things is a valuable engineering capability.

01:33:06 Like this is another degree of freedom

01:33:08 that can be controlled.

01:33:09 I like that, yeah, I think that’s right.

01:33:11 I think there’s something that humans seem to do

01:33:16 or maybe my dangerous introspection is,

01:33:20 I think we are able to make very simple models

01:33:23 that assume a lot about the world very quickly.

01:33:27 And then it takes us a lot more time, like you’re wrestling.

01:33:31 You probably thought you knew what you were doing

01:33:33 with wrestling and you were fairly functional

01:33:35 as a complete wrestler.

01:33:36 And then you slowly got more expertise.

01:33:39 So maybe it’s natural that our first level of defense

01:33:45 against seeing a new robot is to think of it

01:33:48 in our existing models of how humans and animals behave.

01:33:52 And it’s just, as you spend more time with it,

01:33:55 then you’ll develop more sophisticated models

01:33:56 that will appreciate the differences.

01:34:00 Exactly.

01:34:01 Can you say what does it take to control a robot?

01:34:05 Like what is the control problem of a robot?

01:34:08 And in general, what is a robot in your view?

01:34:10 Like how do you think of this system?

01:34:15 What is a robot?

01:34:16 What is a robot?

01:34:17 I think robotics.

01:34:18 I told you ridiculous questions.

01:34:20 No, no, it’s good.

01:34:21 I mean, there’s standard definitions

01:34:22 of combining computation with some ability

01:34:27 to do mechanical work.

01:34:29 I think that gets us pretty close.

01:34:30 But I think robotics has this problem

01:34:34 that once things really work,

01:34:37 we don’t call them robots anymore.

01:34:38 Like my dishwasher at home is pretty sophisticated,

01:34:44 beautiful mechanisms.

01:34:45 There’s actually a pretty good computer,

01:34:46 probably a couple of chips in there doing amazing things.

01:34:49 We don’t think of that as a robot anymore,

01:34:51 which isn’t fair.

01:34:52 Which roughly means

01:34:53 that robotics always has to solve the next problem

01:34:58 and doesn’t get to celebrate its past successes.

01:35:00 I mean, even factory floor robots

01:35:05 are super successful.

01:35:06 They’re amazing.

01:35:08 But those aren't the ones,

01:35:09 I mean, people think of them as robots,

01:35:10 but

01:35:11 if you ask what the successes of robotics are,

01:35:14 somehow they don't come to mind immediately.

01:35:17 So the definition of robot is a system

01:35:20 with some level of automation that fails frequently.

01:35:23 Something like, it’s the computation plus mechanical work

01:35:28 and an unsolved problem.

01:35:30 It’s an unsolved problem, yeah.

01:35:32 So from a perspective of control and mechanics,

01:35:37 dynamics, what is a robot?

01:35:40 So there are many different types of robots.

01:35:42 The control that you need for a Jibo robot,

01:35:47 you know, some robot that’s sitting on your countertop

01:35:50 and interacting with you, but not touching you,

01:35:53 for instance, is very different than what you need

01:35:55 for an autonomous car or an autonomous drone.

01:35:59 It’s very different than what you need for a robot

01:36:01 that’s gonna walk or pick things up with its hands, right?

01:36:04 My passion has always been for the places

01:36:09 where you’re interacting more,

01:36:10 you’re doing more dynamic interactions with the world.

01:36:13 So walking, now manipulation.

01:36:18 And the control problems there are beautiful.

01:36:21 I think contact is one thing that differentiates them

01:36:25 from many of the control problems we’ve solved classically,

01:36:29 right, like modern control grew up stabilizing fighter jets

01:36:32 that were passively unstable,

01:36:34 and there’s like amazing success stories from control

01:36:37 all over the place.

01:36:39 Power grid, I mean, there’s all kinds of,

01:36:41 it’s everywhere that we don’t even realize,

01:36:44 just like AI is now.

01:36:47 So you mentioned contact, like what’s contact?

01:36:51 So an airplane is an extremely complex system

01:36:54 or a spacecraft landing or whatever,

01:36:57 but at least it has the luxury

01:36:59 of things change relatively continuously.

01:37:03 That’s an oversimplification.

01:37:04 But if I make a small change

01:37:07 in the command I send to my actuator,

01:37:10 then the path that the robot will take

01:37:12 tends to change only by a small amount.

01:37:16 And there’s a feedback mechanism here.

01:37:18 That’s what we’re talking about.

01:37:19 And there’s a feedback mechanism.

01:37:20 And thinking about this as locally,

01:37:23 like a linear system, for instance,

01:37:25 I can use more linear algebra tools

01:37:29 to study systems like that,

01:37:31 generalizations of linear algebra to these smooth systems.

01:37:36 What is contact?

01:37:37 The robot has something very discontinuous

01:37:41 that happens when it makes or breaks,

01:37:43 when it starts touching the world.

01:37:45 And even the way it touches or the order of contacts

01:37:48 can change the outcome in potentially unpredictable ways.

01:37:53 Not unpredictable, but complex ways.

01:37:56 I do think there’s a little bit of,

01:38:01 a lot of people will say that contact is hard in robotics,

01:38:04 even to simulate.

01:38:06 And I think there’s a little bit of a,

01:38:08 there’s truth to that,

01:38:09 but maybe a misunderstanding around that.

01:38:13 So what is limiting is that when we think about our robots

01:38:19 and we write our simulators,

01:38:21 we often make an assumption that objects are rigid.

01:37:26 And when it comes down to it, that all their mass

01:37:30 stays in a constant position relative to itself.

01:38:37 And that leads to some paradoxes

01:38:39 when you go to try to talk about

01:38:40 rigid body mechanics and contact.

01:38:43 And so for instance, if I have a three legged stool

01:38:48 with just imagine it comes to a point at the leg.

01:38:51 So it’s only touching the world at a point.

01:38:54 If I draw my physics,

01:38:56 my high school physics diagram of the system,

01:39:00 then there’s a couple of things

01:39:01 that I’m given by elementary physics.

01:39:03 I know if the system, if the table is at rest,

01:39:06 if it’s not moving, zero velocities,

01:39:09 that means that the normal force,

01:39:11 all the forces are in balance.

01:39:13 So the force of gravity is being countered

01:39:16 by the forces that the ground is pushing on my table legs.

01:39:21 I also know since it’s not rotating

01:39:23 that the moments have to balance.

01:39:25 And since it’s a three dimensional table,

01:39:29 it could fall in any direction.

01:39:31 It actually tells me uniquely

01:39:33 what those three normal forces have to be.

01:39:37 If I have four legs on my table,

01:39:39 four legged table and they were perfectly machined

01:39:43 to be exactly the right same height

01:39:45 and they’re set down and the table’s not moving,

01:39:48 then the basic conservation laws don’t tell me,

01:39:51 there are many solutions for the forces

01:39:54 that the ground could be putting on my legs

01:39:56 that would still result in the table not moving.

01:40:00 Now, the reason that seems fine, I could just pick one.

01:40:03 But it gets funny now because if you think about friction,

01:40:07 what we think about with friction is our standard model

01:40:11 says the amount of force that the table will push back

01:40:15 if I were to now try to push my table sideways,

01:40:18 I guess I have a table here,

01:40:20 is proportional to the normal force.

01:40:24 So if I’m barely touching and I push, I’ll slide,

01:40:27 but if I’m pushing more and I push, I’ll slide less.

01:40:30 It’s called Coulomb friction; that's our standard model.

01:40:33 Now, if you don’t know what the normal force is

01:40:35 on the four legs and you push the table,

01:40:38 then you don’t know what the friction forces are gonna be.

01:40:43 And so you can’t actually tell,

01:40:45 the laws just aren’t explicit yet

01:40:47 about which way the table’s gonna go.

01:40:49 It could veer off to the left,

01:40:51 it could veer off to the right, it could go straight.
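To make the static indeterminacy he describes concrete, here is a small numerical sketch (not from the conversation; the leg coordinates are made up). Write the equilibrium conditions for a tabletop resting on point legs, vertical force balance plus two moment balances, as a linear system in the unknown normal forces. With three legs the system is square and full rank, so the forces are uniquely determined; with four legs there is a leftover degree of freedom.

```python
import numpy as np

def equilibrium_matrix(leg_xy):
    # Rows: vertical force balance, then moment balance about the y and x axes.
    xs, ys = leg_xy[:, 0], leg_xy[:, 1]
    return np.vstack([np.ones(len(xs)), xs, ys])

# Three point legs: 3 equations, 3 unknown normal forces -> unique solution.
three = np.array([[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]])
A3 = equilibrium_matrix(three)
print(np.linalg.matrix_rank(A3))  # 3: square and full rank, forces determined

# Four legs: 3 equations, 4 unknowns -> a one-parameter family of solutions.
four = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
A4 = equilibrium_matrix(four)
print(A4.shape[1] - np.linalg.matrix_rank(A4))  # 1 leftover degree of freedom
```

Since the friction force at each leg depends on that undetermined normal force, the indeterminacy propagates straight into which way the table slides.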

01:40:54 So the rigid body assumption of contact

01:40:58 leaves us with some paradoxes,

01:40:59 which are annoying for writing simulators

01:41:02 and for writing controllers.

01:41:04 We still do that sometimes because soft contact

01:41:07 is potentially harder numerically or whatever,

01:41:11 and the best simulators do both

01:41:12 or do some combination of the two.

01:41:15 But anyways, because of these kinds of paradoxes,

01:41:17 there’s all kinds of paradoxes in contact,

01:41:20 mostly due to these rigid body assumptions.

01:41:23 It becomes very hard to write the same kind of control laws

01:41:27 that we’ve been able to be successful with

01:41:29 for fighter jets.

01:41:32 Like fighter jets, we haven’t been as successful

01:41:34 writing those controllers for manipulation.

01:41:37 And so you don’t know what’s going to happen

01:41:39 at the point of contact, at the moment of contact.

01:41:41 There are situations absolutely

01:41:42 where our laws don’t tell us.

01:41:45 So the standard approach, that’s okay.

01:41:47 I mean, instead of having a differential equation,

01:41:51 you end up with a differential inclusion, it’s called.

01:41:53 It’s a set valued equation.

01:41:56 It says that I’m in this configuration,

01:41:58 I have these forces applied on me.

01:42:00 And there’s a set of things that could happen, right?

01:42:03 And you can…

01:42:04 And those aren’t continuous, I mean, what…

01:42:07 So when you’re saying like non smooth,

01:42:10 they’re not only not smooth, but this is discontinuous?

01:42:14 The non smooth comes in

01:42:15 when I make or break a new contact first,

01:42:18 or when I transition from stick to slip.

01:42:21 So you typically have static friction,

01:42:23 and then you’ll start sliding,

01:42:24 and that’ll be a discontinuous change in velocity.

01:42:28 In velocity, for instance,

01:42:31 especially if you come to rest or…
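A minimal sketch of the stick-slip switch he is describing, assuming the standard Coulomb model for a block pushed along the ground (the function and its parameters are illustrative, not from any particular simulator):

```python
import numpy as np

def coulomb_step(v, f_applied, normal, mu_s, mu_k, mass, dt):
    """One explicit Euler step of a block on the ground with Coulomb friction."""
    if v == 0.0:
        # Sticking: static friction cancels the push, up to mu_s * N.
        if abs(f_applied) <= mu_s * normal:
            return 0.0
        # Push exceeds the static limit: break away into sliding.
        f_fric = -np.sign(f_applied) * mu_k * normal
    else:
        f_fric = -np.sign(v) * mu_k * normal
    v_new = v + dt * (f_applied + f_fric) / mass
    # Crossing zero velocity means the block (re)sticks this step.
    if v != 0.0 and np.sign(v_new) != np.sign(v):
        return 0.0
    return v_new

# Push below the static limit: the block stays stuck.
print(coulomb_step(0.0, 4.0, normal=10.0, mu_s=0.5, mu_k=0.4, mass=1.0, dt=0.01))
# Push above it: the dynamics jump onto the sliding branch.
print(coulomb_step(0.0, 6.0, normal=10.0, mu_s=0.5, mu_k=0.4, mass=1.0, dt=0.01))
```

Below the static limit the velocity is pinned at zero; just above it, the system switches to a different vector field entirely, which is exactly the non-smoothness that purely smooth control tools struggle with.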

01:42:33 That’s so fascinating.

01:42:34 Okay, so what do you do?

01:42:37 Sorry, I interrupted you.

01:42:38 It’s fine.

01:42:41 What’s the hope under so much uncertainty

01:42:44 about what’s going to happen?

01:42:45 What are you supposed to do?

01:42:46 I mean, control has an answer for this.

01:42:48 Robust control is one approach,

01:42:50 but roughly you can write controllers

01:42:52 which try to still perform the right task

01:42:55 despite all the things that could possibly happen.

01:42:58 The world might want the table to go this way and this way,

01:43:00 but if I write a controller that pushes a little bit more

01:43:03 and pushes a little bit,

01:43:04 I can certainly make the table go in the direction I want.

01:43:08 It just puts a little bit more of a burden

01:43:10 on the control system, right?

01:43:12 And these discontinuities do change the control system

01:43:15 because the way we write it down right now,

01:43:21 every different control configuration,

01:43:24 including sticking or sliding

01:43:26 or parts of my body that are in contact or not,

01:43:29 looks like a different system.

01:43:30 And I think of them,

01:43:31 I reason about them separately or differently

01:43:34 and the combinatorics of that blow up, right?

01:43:38 So I just don’t have enough time to compute

01:43:41 all the possible contact configurations of my humanoid.

01:43:45 Interestingly, I mean, I’m a humanoid.

01:43:49 I have lots of degrees of freedom, lots of joints.

01:43:52 I’ve only been around for a handful of years.

01:43:54 It’s getting up there,

01:43:55 but I haven’t had time in my life

01:43:59 to visit all of the states in my system,

01:44:03 certainly all the contact configurations.

01:44:05 So if step one is to consider

01:44:08 every possible contact configuration that I’ll ever be in,

01:44:12 that’s probably not a problem I need to solve, right?

01:44:17 Just as a small tangent, what’s a contact configuration?

01:44:20 What like, just so we can enumerate

01:44:24 what are we talking about?

01:44:26 How many are there?

01:44:27 The simplest example maybe would be,

01:44:30 imagine a robot with a flat foot.

01:44:32 And we think about the phases of gait

01:44:35 where the heel strikes and then the front toe strikes,

01:44:40 and then you can heel up, toe off.

01:44:43 Those are each different contact configurations.

01:44:46 I only had two different contacts,

01:44:48 but I ended up with four different contact configurations.

01:44:51 Now, of course, my robot might actually have bumps on it

01:44:57 or other things,

01:44:58 so it could be much more subtle than that, right?

01:45:00 But it’s just even with one sort of box

01:45:03 interacting with the ground already in the plane

01:45:06 has that many, right?

01:45:07 And if I was just even a 3D foot,

01:45:09 then it probably my left toe might touch

01:45:11 just before my right toe and things get subtle.

01:45:14 Now, if I’m a dexterous hand

01:45:16 and I go to talk about just grabbing a water bottle,

01:45:22 if I have to enumerate every possible order

01:45:26 that my hand came into contact with the bottle,

01:45:31 then I’m dead in the water.

01:45:32 We were able to get away with that

01:45:35 in walking because we mostly touched the ground

01:45:38 within a small number of points, for instance,

01:45:40 and we haven’t been able to get dexterous hands that way.
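The combinatorics he is pointing at can be sketched in a few lines (an illustrative enumeration, not how any particular planner actually represents contact modes):

```python
from itertools import product

def contact_modes(num_contacts, per_contact=("off", "stick", "slide")):
    """Enumerate every assignment of a mode to each potential contact point."""
    return list(product(per_contact, repeat=num_contacts))

# A planar foot with heel and toe contacts, each just on/off: 4 configurations,
# matching the heel-strike / toe-strike / heel-off / toe-off phases of gait.
print(len(contact_modes(2, per_contact=("off", "on"))))

# Add stick/slide distinctions and the contact points of a dexterous hand,
# and the mode count blows up exponentially: 3**10 already.
print(len(contact_modes(10)))
```

Any method that has to reason about each mode separately inherits this exponential blow-up, which is why enumerating every contact sequence of a hand grasping a bottle is a non-starter.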

01:45:43 So you’ve mentioned that people think

01:45:50 that contact is really hard

01:45:52 and that that’s the reason that robotic manipulation

01:45:58 as a problem is really hard.

01:46:00 Is there any flaws in that thinking?

01:46:06 So I think simulating contact is one aspect.

01:46:10 I know people often say that we don’t,

01:46:12 that one of the reasons that we have a limit in robotics

01:46:16 is because we do not simulate contact accurately

01:46:19 in our simulators.

01:46:20 And I think the extent to which that's true

01:46:25 is partly because

01:46:27 we haven't had mature enough simulators.

01:46:31 There are some things that are still hard, difficult,

01:46:34 that we should change,

01:46:38 but we actually, we know what the governing equations are.

01:46:41 They have some foibles like this indeterminacy,

01:46:44 but we should be able to simulate them accurately.

01:46:48 We have incredible open source community in robotics,

01:46:51 but it actually just takes a professional engineering team

01:46:54 a lot of work to write a very good simulator like that.

01:46:59 Now, I believe you've written Drake.

01:47:03 There’s a team of people.

01:47:04 I certainly spent a lot of hours on it myself.

01:47:07 But what is Drake and what does it take to create

01:47:12 a simulation environment for the kind of difficult control

01:47:18 problems we’re talking about?

01:47:20 Right, so Drake is the simulator that I’ve been working on.

01:47:24 There are other good simulators out there.

01:47:26 I don’t like to think of Drake as just a simulator

01:47:29 because we write our controllers in Drake,

01:47:31 we write our perception systems a little bit in Drake,

01:47:34 but we write all of our low level control

01:47:37 and even planning and optimization.

01:47:40 So it has optimization capabilities as well?

01:47:42 Absolutely, yeah.

01:47:43 I mean, Drake is three things roughly.

01:47:46 It's an optimization library, which sits on top of

01:47:49 and provides a layer of abstraction in C++ and Python

01:47:54 for commercial solvers.

01:47:55 You can write linear programs, quadratic programs,

01:48:00 semi definite programs, sums of squares programs,

01:48:03 the ones we’ve used, mixed integer programs,

01:48:05 and it will do the work to curate those

01:48:07 and send them to whatever the right solver is for instance,

01:48:10 and it provides a level of abstraction.
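Drake's MathematicalProgram interface is the real version of this. As a rough stand-in for the idea of writing a program once and handing it to whichever backend solver fits, here is the same shape using SciPy's `linprog` on a toy problem (this is not Drake's actual API):

```python
from scipy.optimize import linprog

# Toy linear program: minimize x0 + 2*x1 subject to x0 + x1 >= 1, x >= 0.
# linprog expects A_ub @ x <= b_ub, so the >= constraint is negated.
result = linprog(c=[1.0, 2.0],
                 A_ub=[[-1.0, -1.0]], b_ub=[-1.0],
                 bounds=[(0, None), (0, None)],
                 method="highs")
print(result.x)  # optimum at x0 = 1, x1 = 0
```

The abstraction layer's job is exactly this curation step: take the declared costs and constraints, recognize the problem class (LP, QP, SDP, sums of squares, mixed integer), and dispatch it to an appropriate solver.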

01:48:13 The second thing is a system modeling language,

01:48:18 a bit like LabVIEW or Simulink,

01:48:20 where you can make block diagrams out of complex systems,

01:48:24 or it’s like ROS in that sense,

01:48:26 where you might have lots of ROS nodes

01:48:29 that are each doing some part of your system,

01:48:31 but to contrast it with ROS, we try to write,

01:48:36 if you write a Drake system, then you have to,

01:48:40 it asks you to describe a little bit more about the system.

01:48:43 If you have any state, for instance, in the system,

01:48:46 any variables that are gonna persist,

01:48:47 you have to declare them.

01:48:49 Parameters can be declared and the like,

01:48:51 but the advantage of doing that is that you can,

01:48:54 if you like, run things all on one process,

01:48:57 but you can also do control design against it.

01:49:00 You can do, I mean, simple things like rewinding

01:49:03 and playing back your simulations, for instance,

01:49:07 these things, you get some rewards

01:49:09 for spending a little bit more upfront cost

01:49:11 in describing each system.
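The rewards he mentions, like rewinding and replaying a simulation, fall out of declaring state explicitly. A toy Python analogue of the idea (this is a sketch, not Drake's LeafSystem API):

```python
import copy

class DeclaredStateSystem:
    """Toy analogue of a system that declares its persistent state up front,
    so a simulator can snapshot, rewind, and deterministically replay it."""

    def __init__(self, initial_state):
        self.state = dict(initial_state)  # every persistent variable, declared

    def step(self, u):
        self.state["x"] += u  # trivial dynamics, just for illustration

    def snapshot(self):
        return copy.deepcopy(self.state)

    def restore(self, snap):
        self.state = copy.deepcopy(snap)

plant = DeclaredStateSystem({"x": 0.0})
plant.step(1.0)
snap = plant.snapshot()   # record the full declared state
plant.step(2.0)
print(plant.state["x"])   # 3.0
plant.restore(snap)       # rewind
plant.step(2.0)           # replay the same input
print(plant.state["x"])   # 3.0 again, reproducibly
```

Because nothing persists outside the declared state, restoring a snapshot and replaying the same inputs reproduces the trajectory exactly, which is what makes control design and debugging against such systems tractable.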

01:49:13 And I was inspired to do that

01:49:16 because I think the complexity of Atlas, for instance,

01:49:21 is just so great.

01:49:22 And I think, although, I mean,

01:49:24 ROS has been incredible, I'm an absolutely huge fan

01:49:27 of what it’s done for the robotics community,

01:49:30 but the ability to rapidly put different pieces together

01:49:35 and have a functioning thing is very good.

01:49:38 But I do think that it’s hard to think clearly

01:49:42 about a bag of disparate parts,

01:49:45 Mr. Potato Head kind of software stack.

01:49:48 And if you can ask a little bit more

01:49:53 out of each of those parts,

01:49:54 then you can understand the way they work better.

01:49:56 You can try to verify them and the like,

01:50:00 or you can do learning against them.

01:50:02 And then one of those systems, the last thing,

01:50:04 I said the first two things that Drake is,

01:50:06 but the last thing is that there is a set

01:50:09 of multi body equations, rigid body equations,

01:50:12 that is trying to provide a system that simulates physics.

01:50:16 And we also have renderers and other things,

01:50:20 but I think the physics component of Drake is special

01:50:23 in the sense that we have done an excessive amount

01:50:27 of engineering to make sure

01:50:29 that we’ve written the equations correctly.

01:50:31 Every possible tumbling satellite or spinning top

01:50:34 or anything that we could possibly write as a test is tested.

01:50:38 We are making some, I think, fundamental improvements

01:50:42 on the way you simulate contact.

01:50:44 Just what does it take to simulate contact?

01:50:47 I mean, it just seems,

01:50:50 I mean, there’s something just beautiful

01:50:52 to the way you were like explaining contact

01:50:55 and you were like tapping your fingers

01:50:56 on the table while you’re doing it, just.

01:51:00 Easily, right?

01:51:01 Easily, just like, just not even like,

01:51:04 it was like helping you think, I guess.

01:51:10 So you have this like awesome demo

01:51:12 of loading or unloading a dishwasher,

01:51:16 just picking up a plate,

01:51:18 or grasping it like for the first time.

01:51:26 That’s just seems like so difficult.

01:51:29 What, how do you simulate any of that?

01:51:33 So it was really interesting that what happened was

01:51:35 that we started getting more professional

01:51:39 about our software development

01:51:40 during the DARPA Robotics Challenge.

01:51:43 I learned the value of software engineering

01:51:46 and how these, how to bridle complexity.

01:51:48 I guess that’s what I want to somehow fight against

01:51:52 and bring some of the clear thinking of controls

01:51:54 into these complex systems we’re building for robots.

01:52:00 Shortly after the DARPA Robotics Challenge,

01:52:02 Toyota opened a research institute,

01:52:04 TRI, Toyota Research Institute.

01:52:08 There are three locations.

01:52:10 One of them is just down the street from MIT.

01:52:13 And I helped ramp that right up

01:52:17 as a part of the end of my sabbatical, I guess.

01:52:23 So TRI has given me, the TRI robotics effort

01:52:29 has made this investment in simulation in Drake.

01:52:32 And Michael Sherman leads a team there

01:52:34 of just absolutely top notch dynamics experts

01:52:37 that are trying to write those simulators

01:52:40 that can pick up the dishes.

01:52:41 And there’s also a team working on manipulation there

01:52:44 that is taking problems like loading the dishwasher.

01:52:48 And we’re using that to study these really hard corner cases

01:52:53 kind of problems in manipulation.

01:52:55 So for me, this, you know, simulating the dishes,

01:52:59 we could actually write a controller.

01:53:01 If we just cared about picking up dishes in the sink once,

01:53:05 we could write a controller

01:53:05 without any simulation whatsoever,

01:53:07 and we could call it done.

01:53:10 But we want to understand like,

01:53:12 what is the path you take to actually get to a robot

01:53:17 that could perform that for any dish in anybody’s kitchen

01:53:22 with enough confidence

01:53:23 that it could be a commercial product, right?

01:53:26 And it has deep learning perception in the loop.

01:53:29 It has complex dynamics in the loop.

01:53:31 It has controller, it has a planner.

01:53:33 And how do you take all of that complexity

01:53:36 and put it through this engineering discipline

01:53:39 and verification and validation process

01:53:42 to actually get enough confidence to deploy?

01:53:46 I mean, the DARPA challenge made me realize

01:53:49 that that’s not something you throw over the fence

01:53:52 and hope that somebody will harden it for you,

01:53:54 that there are really fundamental challenges

01:53:57 in closing that last gap.

01:53:59 They’re doing the validation and the testing.

01:54:03 I think it might even change the way we have to think about

01:54:06 the way we write systems.

01:54:09 What happens if you have the robot running lots of tests

01:54:15 and it screws up, it breaks a dish, right?

01:54:19 How do you capture that?

01:54:19 As I said, you can't run the same simulation

01:54:23 or the same experiment twice on a real robot.

01:54:27 Do we have to be able to bring that one off failure

01:54:31 back into simulation

01:54:32 in order to change our controllers, study it,

01:54:35 make sure it won’t happen again?

01:54:37 Do we, is it enough to just try to add that

01:54:40 to our distribution and understand that on average,

01:54:43 we’re gonna cover that situation again?

01:54:45 There’s like really subtle questions at the corner cases

01:54:49 that I think we don’t yet have satisfying answers for.

01:54:53 Like how do you find the corner cases?

01:54:55 That’s one kind of, is there,

01:54:57 do you think that’s possible to create a systematized way

01:55:01 of discovering corner cases efficiently?

01:55:04 Yes.

01:55:05 In whatever the problem is?

01:55:07 Yes, I mean, I think we have to get better at that.

01:55:10 I mean, control theory has for decades

01:55:14 talked about active experiment design.

01:55:17 What’s that?

01:55:19 So people call it curiosity these days.

01:55:22 It's roughly this idea of exploration

01:55:24 or exploitation, but active experiment design

01:55:27 is even more specific.

01:55:29 You could try to understand the uncertainty in your system,

01:55:34 design the experiment that will provide

01:55:36 the maximum information to reduce that uncertainty.

01:55:40 If there’s a parameter you wanna learn about,

01:55:42 what is the optimal trajectory I could execute

01:55:45 to learn about that parameter, for instance.
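A tiny illustration of that last point, under the assumption of a scalar model y = theta*u + noise: the information a measurement carries about theta grows with u squared, so the variance of the estimate shrinks as the input gets more aggressive, and the optimal experiment pushes the input toward its allowed limit.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.5  # the unknown parameter we want to identify

def estimate_var(u, trials=2000, noise=1.0):
    """Variance of the least-squares estimate of theta from one noisy
    measurement y = theta*u + w, repeated over many trials."""
    y = theta * u + noise * rng.standard_normal(trials)
    return np.var(y / u)  # estimator variance scales like noise**2 / u**2

# A larger, more informative input shrinks the estimator's variance.
print(estimate_var(0.5) > estimate_var(2.0))  # True
```

Optimal experiment design generalizes this: choose the trajectory that maximizes the information gained (minimizes the remaining uncertainty) about the parameters you care about.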

01:55:49 Scaling that up to something that has a deep network

01:55:51 in the loop and a planning in the loop is tough.

01:55:55 We’ve done some work on, you know,

01:55:58 with Matt O'Kelly and Aman Sinha,

01:56:00 we’ve worked on some falsification algorithms

01:56:03 that are trying to do rare event simulation

01:56:05 that try to just hammer on your simulator.

01:56:08 And if your simulator is good enough,

01:56:10 you can spend a lot of time,

01:56:13 or you can write good algorithms

01:56:15 that try to spend most of their time in the corner cases.

01:56:19 So you basically imagine you’re building an autonomous car

01:56:25 and you wanna put it in, I don’t know,

01:56:27 downtown New Delhi all the time, right?

01:56:29 And accelerated testing.

01:56:31 If you can write sampling strategies,

01:56:33 which figure out where your controller’s

01:56:35 performing badly in simulation

01:56:37 and start generating lots of examples around that.

01:56:40 You know, it’s just the space of possible places

01:56:44 where that can be, where things can go wrong is very big.

01:56:48 So it’s hard to write those algorithms.
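One way to spend most of the simulation budget in the corner cases is a cross-entropy-style adaptive sampler: draw disturbances, keep the ones where the simulated controller does worst, and refit the sampling distribution around them. A sketch with a made-up scalar cost standing in for a full closed-loop simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(disturbance):
    # Stand-in for "how badly the controller does" under this disturbance;
    # in this toy world, failures live out near disturbance = 3.
    return -(disturbance - 3.0) ** 2

mu, sigma = 0.0, 2.0  # initial guess at the disturbance distribution
for _ in range(20):
    samples = rng.normal(mu, sigma, size=200)
    # Keep the 20 samples with the worst controller performance.
    elite = samples[np.argsort(cost(samples))[-20:]]
    # Refit the sampler toward the failure region (floor keeps it alive).
    mu, sigma = elite.mean(), elite.std() + 1e-3
print(mu)  # concentrates near the failure region around 3.0
```

Real falsification tools are far more sophisticated, but the principle is the same: the sampler is steered so that rare, high-impact failures stop being rare in simulation.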

01:56:49 Yeah, rare event simulation

01:56:51 is just a really compelling notion, if it’s possible.

01:56:55 We joked and we call it the black swan generator.

01:56:58 It’s a black swan.

01:57:00 Because you don’t just want the rare events,

01:57:01 you want the ones that are highly impactful.

01:57:04 I mean, that’s the most,

01:57:06 those are the most sort of profound questions

01:57:08 we ask of our world.

01:57:10 Like, what’s the worst that can happen?

01:57:16 But what we’re really asking

01:57:18 isn’t some kind of like computer science,

01:57:20 worst case analysis.

01:57:22 We’re asking like, what are the millions of ways

01:57:25 this can go wrong?

01:57:27 And that’s like our curiosity.

01:57:29 And we humans, I think are pretty bad at,

01:57:34 we just like run into it.

01:57:36 And I think there’s a distributed sense

01:57:38 because there’s now like 7.5 billion of us.

01:57:41 And so there’s a lot of them.

01:57:42 And then a lot of them write blog posts

01:57:45 about the stupid thing they’ve done.

01:57:46 So we learn in a distributed way.

01:57:49 There’s some.

01:57:50 I think that’s gonna be important for robots too.

01:57:53 I mean, that’s another massive theme

01:57:55 at Toyota Research for Robotics

01:57:58 is this fleet learning concept

01:58:00 is the idea that I, as a human,

01:58:04 I don’t have enough time to visit all of my states, right?

01:58:07 There’s just a, it’s very hard for one robot

01:58:10 to experience all the things.

01:58:12 But that’s not actually the problem we have to solve, right?

01:58:16 We’re gonna have fleets of robots

01:58:17 that can have very similar appendages.

01:58:20 And at some point, maybe collectively,

01:58:24 they have enough data

01:58:26 that their computational processes

01:58:29 should be set up differently than ours, right?

01:58:31 It’s this vision of just,

01:58:34 I mean, all these dishwasher unloading robots.

01:58:38 I mean, that robot dropping a plate

01:58:42 and a human looking at the robot probably pissed off.

01:58:46 Yeah.

01:58:47 But that’s a special moment to record.

01:58:51 I think one thing in terms of fleet learning,

01:58:54 and I’ve seen that because I’ve talked to a lot of folks,

01:58:57 just like Tesla users or Tesla drivers,

01:59:01 they’re another company

01:59:02 that’s using this kind of fleet learning idea.

01:59:05 One hopeful thing I have about humans

01:59:08 is they really enjoy when a system improves, learns.

01:59:13 So they enjoy fleet learning.

01:59:14 And the reason it’s hopeful for me

01:59:17 is they’re willing to put up with something

01:59:20 that’s kind of dumb right now.

01:59:22 And they’re like, if it’s improving,

01:59:25 they almost like enjoy being part of the, like teaching it.

01:59:29 Almost like if you have kids,

01:59:30 like you’re teaching them something, right?

01:59:33 I think that’s a beautiful thing

01:59:35 because that gives me hope

01:59:36 that we can put dumb robots out there.

01:59:40 I mean, the problem on the Tesla side with cars,

01:59:43 cars can kill you.

01:59:45 That makes the problem so much harder.

01:59:47 Dishwasher unloading is a little safe.

01:59:50 That’s why home robotics is really exciting.

01:59:54 And just to clarify, I mean, for people who might not know,

01:59:57 I mean, TRI, Toyota Research Institute.

02:00:00 So they’re, I mean, they’re pretty well known

02:00:03 for like autonomous vehicle research,

02:00:06 but they’re also interested in home robotics.

02:00:10 Yep, there’s a big group working on,

02:00:12 multiple groups working on home robotics.

02:00:14 It’s a major part of the portfolio.

02:00:17 There’s also a couple other projects

02:00:19 in advanced materials discovery,

02:00:21 using AI and machine learning to discover new materials

02:00:24 for car batteries and the like, for instance, yeah.

02:00:28 And that’s been actually an incredibly successful team.

02:00:31 There’s new projects starting up too, so.

02:00:33 Do you see a future of where like robots are in our home

02:00:38 and like robots that have like actuators

02:00:44 that look like arms in our home

02:00:46 or like, you know, more like humanoid type robots?

02:00:49 Or is this, are we gonna do the same thing

02:00:51 that you just mentioned that, you know,

02:00:53 the dishwasher is no longer a robot.

02:00:55 We’re going to just not even see them as robots.

02:00:58 But I mean, what’s your vision of the home of the future

02:01:02 10, 20 years from now, 50 years, if you get crazy?

02:01:06 Yeah, I think we already have Roombas cruising around.

02:01:10 We have, you know, Alexas or Google Homes

02:01:13 on our kitchen counter.

02:01:16 It’s only a matter of time until they sprout arms

02:01:18 and start doing something useful like that.

02:01:21 So I do think it’s coming.

02:01:23 I think lots of people have lots of motivations

02:01:27 for doing it.

02:01:29 It’s been super interesting actually learning

02:01:31 about Toyota’s vision for it,

02:01:33 which is about helping people age in place.

02:01:38 Cause I think that’s not necessarily the first entry,

02:01:41 the most lucrative entry point,

02:01:44 but it’s the problem maybe that we really need to solve

02:01:48 no matter what.

02:01:50 And so I think there’s a real opportunity.

02:01:53 It’s a delicate problem.

02:01:55 How do you work with people, help people,

02:01:59 keep them active, engaged, you know,

02:02:03 but improve their quality of life

02:02:05 and help them age in place, for instance.

02:02:08 It’s interesting because older folks are also,

02:02:12 I mean, there’s a contrast there

02:02:13 because they’re not always the folks

02:02:18 who are the most comfortable with technology, for example.

02:02:20 So there’s a division that’s interesting.

02:02:24 You can do so much good with a robot for older folks,

02:02:32 but there’s a gap to fill of understanding.

02:02:36 I mean, it’s actually kind of beautiful.

02:02:39 Robot is learning about the human

02:02:41 and the human is kind of learning about this new robot thing.

02:02:44 And it’s also with, at least with,

02:02:49 like when I talked to my parents about robots,

02:02:51 there’s a little bit of a blank slate there too.

02:02:54 Like you can, I mean, they don’t know anything

02:02:58 about robotics, so it’s completely like wide open.

02:03:03 My parents haven’t seen Black Mirror.

02:03:06 So it’s a blank slate.

02:03:09 Here’s a cool thing, like what can it do for me?

02:03:11 Yeah, so it’s an exciting space.

02:03:14 I think it’s a really important space.

02:03:16 I do feel like a few years ago,

02:03:20 drones were successful enough in academia.

02:03:22 They kind of broke out and started an industry

02:03:25 and autonomous cars have been happening.

02:03:29 It does feel like manipulation in logistics, of course,

02:03:32 first, but in the home shortly after,

02:03:35 seems like one of the next big things

02:03:37 that’s gonna really pop.

02:03:40 So I don’t think we talked about it,

02:03:42 but what’s soft robotics?

02:03:44 So we talked about like rigid bodies.

02:03:49 Like if we can just linger on this whole touch thing.

02:03:52 Yeah, so what’s soft robotics?

02:03:54 So I told you that I really dislike the fact

02:04:00 that robots are afraid of touching the world

02:04:03 all over their body.

02:04:04 So there’s a couple reasons for that.

02:04:06 If you look carefully at all the places

02:04:08 that robots actually do touch the world,

02:04:11 they’re almost always soft.

02:04:12 They have some sort of pad on their fingers

02:04:14 or a rubber sole on their foot.

02:04:17 But if you look up and down the arm,

02:04:19 we’re just pure aluminum or something.

02:04:25 So that makes it hard actually.

02:04:26 In fact, hitting the table with your rigid arm

02:04:30 or nearly rigid arm has some of the problems

02:04:34 that we talked about in terms of simulation.

02:04:37 I think it fundamentally changes the mechanics of contact

02:04:39 when you’re soft, right?

02:04:41 You turn point contacts into patch contacts,

02:04:45 which can have torsional friction.

02:04:47 You can have distributed load.

02:04:49 If I wanna pick up an egg, right?

02:04:52 If I pick it up with two points,

02:04:54 then in order to put enough force

02:04:56 to sustain the weight of the egg,

02:04:57 I might have to put a lot of force to break the egg.

02:04:59 If I envelop it with contact all around,

02:05:04 then I can distribute my force across the shell of the egg

02:05:07 and have a better chance of not breaking it.

02:05:10 So soft robotics is for me a lot about changing

02:05:12 the mechanics of contact.

02:05:15 Does it make the problem a lot harder?

02:05:19 Quite the opposite.

02:05:24 It changes the computational problem.

02:05:26 I think our world

02:05:30 and our mathematics have biased us towards rigid.

02:05:34 I see.

02:05:35 But it really should make things better in some ways, right?

02:05:40 I think the future is unwritten there.

02:05:44 But the other thing it can do.

02:05:45 I think ultimately, sorry to interrupt,

02:05:46 but I think ultimately it will make things simpler

02:05:49 if we embrace the softness of the world.

02:05:51 It makes things smoother, right?

02:05:55 So the result of small actions is less discontinuous,

02:06:00 but it also means potentially less instantaneously bad.

02:06:05 For instance, I won’t necessarily contact something

02:06:09 and send it flying off.

02:06:12 The other aspect of it

02:06:13 that just happens to dovetail really well

02:06:14 is that soft robotics tends to be a place

02:06:17 where we can embed a lot of sensors too.

02:06:19 So if you change your hardware and make it more soft,

02:06:23 then you can potentially have a tactile sensor,

02:06:25 which is measuring the deformation.

02:06:27 So there’s a team at TRI that’s working on soft hands

02:06:32 and you get so much more information.

02:06:35 You can put a camera behind the skin roughly

02:06:38 and get fantastic tactile information,

02:06:42 which is, it’s super important.

02:06:46 Like in manipulation,

02:06:47 one of the things that really is frustrating

02:06:49 is if you work super hard

02:06:52 on the perception system for your head mounted cameras,

02:06:54 and you get a lot of information

02:06:56 from those cameras,

02:06:57 and then you’ve identified an object,

02:06:59 you reach down to touch it,

02:07:00 and the last thing that happens,

02:07:01 right before the most important time,

02:07:03 is you stick your hand in

02:07:04 and you’re occluding your head mounted sensors.

02:07:07 So in all the part that really matters,

02:07:10 all of your off board sensors are occluded.

02:07:13 And really, if you don’t have tactile information,

02:07:15 then you’re blind in an important way.

02:07:19 So it happens that soft robotics and tactile sensing

02:07:23 tend to go hand in hand.

02:07:25 I think we’ve kind of talked about it,

02:07:26 but you taught a course on underactuated robotics.

02:07:31 I believe that was the name of it, actually.

02:07:32 That’s right.

02:07:34 Can you talk about it in that context?

02:07:37 What is underactuated robotics?

02:07:40 Right, so underactuated robotics is my graduate course.

02:07:43 It’s online mostly now,

02:07:46 in the sense that the lectures.

02:07:47 Several versions of it, I think.

02:07:49 Right, the YouTube.

02:07:49 It’s really great, I recommend it highly.

02:07:52 Look on YouTube for the 2020 versions.

02:07:55 Until March, and then you have to go back to 2019,

02:07:57 thanks to COVID.

02:08:00 No, I’ve poured my heart into that class.

02:08:04 And lecture one is basically explaining

02:08:06 what the word underactuated means.

02:08:07 So people are very kind to show up

02:08:09 and then maybe have to learn

02:08:12 what the title of the course means

02:08:13 over the course of the first lecture.

02:08:15 That first lecture is really good.

02:08:17 You should watch it.

02:08:18 Thanks.

02:08:19 It’s a strange name,

02:08:21 but I thought it captured the essence

02:08:25 of what control was good at doing

02:08:27 and what control was bad at doing.

02:08:29 So what do I mean by underactuated?

02:08:31 So a mechanical system

02:08:36 has many degrees of freedom, for instance.

02:08:39 I think of a joint as a degree of freedom.

02:08:41 And it has some number of actuators, motors.

02:08:46 So if you have a robot that’s bolted to the table

02:08:49 that has five degrees of freedom and five motors,

02:08:54 then you have a fully actuated robot.

02:08:57 If you take away one of those motors,

02:09:00 then you have an underactuated robot.
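This definition can be sketched compactly in the standard notation of the field (the symbols q, u, and B below are conventional, not used in the conversation itself):

```latex
% Equations of motion in manipulator form:
%   q: generalized coordinates (one per degree of freedom)
%   u: actuator inputs, mapped into generalized forces by B
M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} = \tau_g(q) + B\,u
% Fully actuated:  rank(B) = dim(q)
%   (any instantaneous acceleration \ddot{q} is achievable)
% Underactuated:   rank(B) < dim(q)
%   (some accelerations cannot be commanded directly)
```

So removing one motor from a five-joint, five-motor arm drops the rank of B below the number of degrees of freedom, which is exactly the case described above.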

02:09:03 Now, why on earth?

02:09:04 I have a good friend who likes to tease me.

02:09:07 He said, Russ, if you had more research funding,

02:09:09 would you work on fully actuated robots?

02:09:11 Yeah.

02:09:12 And the answer is no.

02:09:15 The world gives us underactuated robots,

02:09:17 whether we like it or not.

02:09:18 I’m a human.

02:09:19 I’m an underactuated robot,

02:09:21 even though I have more muscles

02:09:23 than degrees of freedom,

02:09:25 because I have in some places

02:09:27 multiple muscles attached to the same joint.

02:09:30 But still, there’s a really important degree of freedom

02:09:33 that I have, which is the location of my center of mass

02:09:37 in space, for instance.

02:09:39 All right, I can jump into the air,

02:09:42 and there’s no motor that connects my center of mass

02:09:45 to the ground in that case.

02:09:47 So I have to think about the implications

02:09:49 of not having control over everything.

02:09:52 The passive dynamic walkers are the extreme view of that,

02:09:56 where you’ve taken away all the motors,

02:09:57 and you have to let physics do the work.

02:09:59 But it shows up in all of the walking robots,

02:10:02 where you have to use some of the actuators

02:10:04 to push and pull even the degrees of freedom

02:10:06 that you don’t have an actuator on.

02:10:09 That’s referring to walking if you’re falling forward.

02:10:13 Is there a way to walk that’s fully actuated?

02:10:16 So it’s a subtle point.

02:10:18 When you’re in contact and you have your feet on the ground,

02:10:23 there are still limits to what you can do, right?

02:10:26 Unless I have suction cups on my feet,

02:10:29 I cannot accelerate my center of mass towards the ground

02:10:32 faster than gravity,

02:10:33 because I can’t get a force pushing me down, right?
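The gravity limit described here can be sketched in a line of mechanics (the symbols below are assumed standard notation, not from the conversation):

```latex
% Vertical Newton's law for the center of mass,
% with ground contact forces f_i (vertical components f_i^z):
m\,\ddot{z}_{\mathrm{com}} = -\,m g + \sum_i f_i^z
% Without suction cups, contacts can only push: f_i^z \ge 0,
% so the center of mass cannot accelerate downward faster than free fall:
\ddot{z}_{\mathrm{com}} \ge -g
```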

02:10:37 But I can still do most of the things that I want to.

02:10:39 So you can get away with basically thinking of the system

02:10:42 as fully actuated,

02:10:43 unless you suddenly needed to accelerate down super fast.

02:10:47 But as soon as I take a step,

02:10:49 I get into the more nuanced territory,

02:10:52 and to get to really dynamic robots,

02:10:55 or airplanes or other things,

02:10:59 I think you have to embrace the underactuated dynamics.

02:11:02 Manipulation, people think, is manipulation underactuated?

02:11:06 Even if my arm is fully actuated, I have a motor,

02:11:10 if my goal is to control the position and orientation

02:11:14 of this cup, then I don’t have an actuator

02:11:18 for that directly.

02:11:19 So I have to use my actuators over here

02:11:21 to control this thing.

02:11:23 Now it gets even worse,

02:11:24 like what if I have to button my shirt, okay?

02:11:29 What are the degrees of freedom of my shirt, right?

02:11:31 I suddenly, that’s a hard question to think about.

02:11:34 It kind of makes me queasy

02:11:36 thinking about my state space control ideas.

02:11:40 But actually those are the problems

02:11:41 that make me so excited about manipulation right now,

02:11:44 is that it breaks

02:11:48 a lot of the foundational control stuff

02:11:50 that I’ve been thinking about.

02:11:51 What are some interesting insights

02:11:54 you could share about trying to solve

02:11:58 control in an underactuated system?

02:12:02 So I think the philosophy there

02:12:04 is let physics do more of the work.

02:12:08 The technical approach has been optimization.

02:12:12 So you typically formulate your decision making

02:12:14 for control as an optimization problem.

02:12:17 And you use the language of optimal control

02:12:19 and sometimes often numerical optimal control

02:12:22 in order to make those decisions, balancing

02:12:26 these complicated equations of motion.

02:12:29 Now, to control underactuated systems,

02:12:30 you don’t have to use optimal control,

02:12:34 but that has been the technical approach

02:12:36 that has borne the most fruit in our,

02:12:39 at least in our line of work.
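As a minimal, illustrative sketch of this optimization-based approach (not from the conversation; the cart-pole parameters below are assumed unit masses and pole length), here is a linear quadratic regulator balancing a linearized cart-pole, a classic underactuated system with one actuator and two degrees of freedom:

```python
import numpy as np

# Cart-pole linearized about the upright fixed point, with assumed
# parameters: cart mass 1, pole mass 1, pole length 1, g = 9.8.
# State x = (cart position, pole angle, and their rates); the single
# input u is a horizontal force on the cart -- one actuator for two
# degrees of freedom, hence underactuated.
g = 9.8
A_c = np.array([[0.0, 0.0,     1.0, 0.0],
                [0.0, 0.0,     0.0, 1.0],
                [0.0, g,       0.0, 0.0],
                [0.0, 2.0 * g, 0.0, 0.0]])
B_c = np.array([[0.0], [0.0], [1.0], [1.0]])

# Forward-Euler discretization with a small time step.
dt = 0.01
A = np.eye(4) + dt * A_c
B = dt * B_c

# Quadratic cost weights (assumed values for illustration).
Q = np.eye(4)
R = np.array([[0.1]])

# Solve the discrete-time Riccati equation by value iteration:
# S <- Q + A' S (A - B K), with K the one-step-optimal gain.
S = Q.copy()
for _ in range(10000):
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    S_next = Q + A.T @ S @ (A - B @ K)
    if np.max(np.abs(S_next - S)) < 1e-9:
        S = S_next
        break
    S = S_next

# Optimal feedback u = -K x; the closed loop x[t+1] = (A - B K) x[t]
# should be stable (spectral radius below 1).
K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
rho = float(np.max(np.abs(np.linalg.eigvals(A - B @ K))))
print("closed-loop spectral radius:", rho)
```

This is the simplest member of the optimal-control family mentioned above: the decision (the gain K) falls out of minimizing a quadratic cost subject to the dynamics, rather than being hand-designed.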

02:12:40 And there’s some, so in underactuated systems,

02:12:44 when you say let physics do some of the work,

02:12:46 so there’s a kind of feedback loop

02:12:50 that observes the state that the physics brought you to.

02:12:54 So like you’ve, there’s a perception there,

02:12:57 there’s a feedback somehow.

02:13:00 Do you ever loop in like complicated perception systems

02:13:05 into this whole picture?

02:13:06 Right, right around the time of the DARPA challenge,

02:13:09 we had a complicated perception system

02:13:11 in the DARPA challenge.

02:13:12 We also started to embrace perception

02:13:15 for our flying vehicles at the time.

02:13:17 We had a really good project

02:13:20 on trying to make airplanes fly

02:13:21 at high speeds through forests.

02:13:24 Sertac Karaman was on that project

02:13:27 and we had, it was a really fun team to work on.

02:13:30 He’s carried it farther, much farther forward since then.

02:13:34 And that’s using cameras for perception?

02:13:35 So that was using cameras.

02:13:37 That was, at the time we felt like LIDAR

02:13:40 was too heavy and too power heavy

02:13:44 to be carried on a light UAV,

02:13:47 and we were using cameras.

02:13:49 And that was a big part of it was just

02:13:50 how do you do even stereo matching

02:13:53 at a fast enough rate with a small camera,

02:13:56 small onboard compute.

02:13:58 Since then we have now,

02:14:00 so the deep learning revolution

02:14:02 unquestionably changed what we can do

02:14:05 with perception for robotics and control.

02:14:09 So in manipulation, we can address,

02:14:11 we can use perception in I think a much deeper way.

02:14:14 And we get into not only,

02:14:17 I think the first use of it naturally

02:14:19 would be to ask your deep learning system

02:14:22 to look at the cameras and produce the state,

02:14:25 which is like the pose of my thing, for instance.

02:14:28 But I think we’ve quickly found out

02:14:30 that that’s not always the right thing to do.

02:14:34 Why is that?

02:14:35 Because what’s the state of my shirt?

02:14:38 Imagine, I’ve always,

02:14:39 Very noisy, you mean, or?

02:14:41 It’s, if the first step of me trying to button my shirt

02:14:46 is estimate the full state of my shirt,

02:14:48 including like what’s happening in the back here,

02:14:50 whatever, whatever.

02:14:51 That’s just not the right specification.

02:14:55 There are aspects of the state

02:14:57 that are very important to the task.

02:15:00 There are many that are unobservable

02:15:03 and not important to the task.

02:15:05 So you really need,

02:15:06 it begs new questions about state representation.

02:15:11 Another example that we’ve been playing with in lab

02:15:13 has been just the idea of chopping onions, okay?

02:15:17 Or carrots, turns out to be better.

02:15:20 So onions stink up the lab.

02:15:22 And they’re hard to see in a camera.

02:15:26 But so,

02:15:27 Details matter, yeah.

02:15:28 Details matter, you know?

02:15:30 So if I’m moving around a particular object, right?

02:15:35 Then I think about,

02:15:36 oh, it’s got a position or an orientation in space.

02:15:38 That’s the description I want.

02:15:39 Now, when I’m chopping an onion, okay?

02:15:42 Like the first chop comes down.

02:15:44 I have now a hundred pieces of onion.

02:15:48 Does my control system really need to understand

02:15:50 the position and orientation and even the shape

02:15:52 of the hundred pieces of onion in order to make a decision?

02:15:56 Probably not, you know?

02:15:58 And if I keep going,

is my state space just getting bigger and bigger as I cut?

02:16:04 It’s not right.

02:16:06 So somehow there’s a,

02:16:08 I think there’s a richer idea of state.

02:16:13 It’s not the state that is given to us

02:16:15 by Lagrangian mechanics.

02:16:17 There is a proper Lagrangian state of the system,

02:16:21 but the relevant state for this is some latent state

02:16:26 is what we call it in machine learning.

02:16:28 But, you know, there’s some different state representation.

02:16:32 Some compressed representation, some.

02:16:35 And that’s what I worry about saying compressed

02:16:37 because it doesn’t,

02:16:38 I don’t mind that it’s low dimensional or not,

02:16:43 but it has to be something that’s easier to think about.

02:16:46 By us humans.

02:16:48 Or my algorithms.

02:16:49 Or the algorithms being like control, optimal.

02:16:53 So for instance, if the contact mechanics

02:16:56 of all of those onion pieces and all the permutations

02:16:59 of possible touches between those onion pieces,

02:17:02 you know, you can give me

02:17:03 a high dimensional state representation,

02:17:05 I’m okay if it’s linear.

02:17:06 But if I have to think about all the possible

02:17:08 shattering combinatorics of that,

02:17:11 then my robot’s gonna sit there thinking

02:17:13 and the soup’s gonna get cold or something.

02:17:17 So since you taught the course,

02:17:20 it kind of entered my mind,

02:17:22 the idea of underactuated as really compelling

02:17:25 to see the world in this kind of way.

02:17:29 Do you ever, you know, if we talk about onions

02:17:32 or you talk about the world with people in it in general,

02:17:35 do you see the world as basically an underactuated system?

02:17:39 Do you like often look at the world in this way?

02:17:42 Or is this overreach?

02:17:47 Underactuated is a way of life, man.

02:17:49 Exactly, I guess that’s what I’m asking.

02:17:53 I do think it’s everywhere.

02:17:54 I think in some places,

02:17:58 we already have natural tools to deal with it.

02:18:01 You know, it rears its head.

02:18:02 I mean, in linear systems, it’s not a problem.

02:18:04 We just, like an underactuated linear system

02:18:07 is really not sufficiently distinct

02:18:09 from a fully actuated linear system.

02:18:10 It’s a subtle point about when that becomes a bottleneck

02:18:15 in what we know how to do with control.

02:18:17 It happens to be a bottleneck,

02:18:19 although we’ve gotten incredibly good solutions now,

02:18:22 but for a long time that I felt

02:18:24 that that was the key bottleneck in legged robots.

02:18:27 And roughly now the underactuated course

02:18:29 is me trying to tell people everything I can

02:18:33 about how to make Atlas do a backflip, right?

02:18:38 I have a second course now

02:18:39 that I teach in the other semesters,

02:18:41 which is on manipulation.

02:18:43 And that’s where we get into now more of the,

02:18:45 that’s a newer class.

02:18:47 I’m hoping to put it online this fall completely.

02:18:51 And that’s gonna have much more aspects

02:18:53 about these perception problems

02:18:55 and the state representation questions,

02:18:57 and then how do you do control.

02:18:59 And the thing that’s a little bit sad is that,

02:19:04 for me at least, is there’s a lot of manipulation tasks

02:19:07 that people wanna do and should wanna do.

02:19:09 They could start a company with it and be very successful

02:19:12 that don’t actually require you to think that much

02:19:15 about underactuated dynamics,

02:19:18 or dynamics at all, even.

02:19:20 Once I have, if I reach out and grab something,

02:19:23 if I can sort of assume it’s rigidly attached to my hand,

02:19:25 then I can do a lot of interesting,

02:19:26 meaningful things with it

02:19:28 without really ever thinking about the dynamics

02:19:30 of that object.

02:19:32 So we’ve built systems that kind of reduce the need for that.

02:19:37 Enveloping grasps and the like.

02:19:40 But I think the really good problems in manipulation.

02:19:43 So manipulation, by the way, is more than just pick and place.

02:19:48 That’s like a lot of people think of that, just grasping.

02:19:51 I don’t mean that.

02:19:52 I mean buttoning my shirt, I mean tying shoelaces.

02:19:56 How do you program a robot to tie shoelaces?

02:19:59 And not just one shoe, but every shoe, right?

02:20:02 That’s a really good problem.

02:20:05 It’s tempting to write down like the infinite dimensional

02:20:08 state of the laces, that’s probably not needed

02:20:13 to write a good controller.

02:20:15 I know we could hand design a controller that would do it,

02:20:18 but I don’t want that.

02:20:19 I want to understand the principles that would allow me

02:20:22 to solve another problem that’s kind of like that.

02:20:25 But I think if we can stay pure in our approach,

02:20:29 then the challenge of tying anybody’s shoes

02:20:33 is a great challenge.

02:20:36 That’s a great challenge.

02:20:37 I mean, and the soft touch comes into play there.

02:20:40 That’s really interesting.

02:20:43 Let me ask another ridiculous question on this topic.

02:20:47 How important is touch?

02:20:49 We haven’t talked much about humans,

02:20:52 but I have this argument with my dad

02:20:56 where like I think you can fall in love with a robot

02:20:59 based on language alone.

02:21:02 And he believes that touch is essential.

02:21:06 Touch and smell, he says.

02:21:07 But so in terms of robots, connecting with humans,

02:21:17 we can go philosophical in terms of like a deep,

02:21:19 meaningful connection, like love,

02:21:21 but even just like collaborating in an interesting way,

02:21:25 how important is touch like from an engineering perspective

02:21:30 and a philosophical one?

02:21:32 I think it’s super important.

02:21:35 Even just in a practical sense,

02:21:37 if we forget about the emotional part of it.

02:21:40 But for robots to interact safely

02:21:43 while they’re doing meaningful mechanical work

02:21:47 in the close contact with or vicinity of people

02:21:52 that need help, I think we have to have them,

02:21:55 we have to build them differently.

02:21:57 They have to be not afraid of touching the world.

02:21:59 So I think Baymax is just awesome.

02:22:02 That’s just like the movie of Big Hero 6

02:22:06 and the concept of Baymax, that’s just awesome.

02:22:08 I think we should, and we have some folks

02:22:13 at Toyota Research

02:22:14 that are trying to build Baymax, roughly.

02:22:16 And I think it’s just a fantastically good project.

02:22:21 I think it will change the way people physically interact.

02:22:25 The same way, I mean, you gave a couple examples earlier,

02:22:27 but if the robot that was walking around my home

02:22:31 looked more like a teddy bear

02:22:33 and a little less like the Terminator,

02:22:35 that could change completely the way people perceive it

02:22:38 and interact with it.

02:22:39 And maybe they’ll even wanna teach it, like you said, right?

02:22:44 You could not quite gamify it,

02:22:47 but somehow instead of people judging it

02:22:50 and looking at it as if it’s not doing as well as a human,

02:22:54 they’re gonna try to help out the cute teddy bear, right?

02:22:57 Who knows, but I think we’re building robots wrong

02:23:01 and being more soft and more contact is important, right?

02:23:07 Yeah, I mean, like all the magical moments

02:23:09 I can remember with robots,

02:23:12 well, first of all, just visiting your lab and seeing Atlas,

02:23:16 but also Spotmini, when I first saw Spotmini in person

02:23:21 and hung out with him, her, it,

02:23:26 I don’t have trouble gendering robots.

02:23:28 I feel like the robotics people really say, oh, is it it?

02:23:31 I kinda like the idea that it’s a her or a him.

02:23:35 There’s a magical moment, but there’s no touching.

02:23:38 I guess the question I have, have you ever been,

02:23:41 like, have you had a human robot experience

02:23:44 where a robot touched you?

02:23:49 And like, it was like, wait,

02:23:51 like, was there a moment that you’ve forgotten

02:23:53 that a robot is a robot and like,

02:23:57 the anthropomorphization stepped in

02:24:00 and for a second you forgot that it’s not human?

02:24:04 I mean, I think when you’re in on the details,

02:24:07 then we, of course, anthropomorphized our work with Atlas,

02:24:12 but in verbal communication and the like,

02:24:17 I think we were pretty aware of it

02:24:18 as a machine that needed to be respected.

02:24:21 And I actually, I worry more about the smaller robots

02:24:26 that could still move quickly if programmed wrong

02:24:29 and we have to be careful actually

02:24:31 about safety and the like right now.

02:24:33 And that, if we build our robots correctly,

02:24:36 I think then those, a lot of those concerns could go away.

02:24:40 And we’re seeing that trend.

02:24:41 We’re seeing the lower cost, lighter weight arms now

02:24:44 that could be fundamentally safe.

02:24:46 I mean, I do think touch is so fundamental.

02:24:49 Ted Adelson is great.

02:24:51 He’s a perceptual scientist at MIT

02:24:55 and he studied vision most of his life.

02:24:58 And he said, when I had kids,

02:25:01 I expected to be fascinated by their perceptual development.

02:25:06 But what he noticed,

02:25:09 what felt more impressive, more dominant,

02:25:10 was the way that they would touch everything

02:25:13 and lick everything,

02:25:13 pick things up, stick things on their tongue and whatever.

02:25:16 And he said, watching his daughter convinced him

02:25:22 that actually he needed to study tactile sensing more.

02:25:25 So there’s something very important.

02:25:30 I think it’s a little bit also of the passive

02:25:32 versus active part of the world, right?

02:25:35 You can passively perceive the world.

02:25:38 But it’s fundamentally different if you can do an experiment

02:25:41 and if you can change the world

02:25:43 and you can learn a lot more than a passive observer.

02:25:47 So you can in dialogue, that was your initial example,

02:25:51 you could have an active experiment exchange.

02:25:54 But I think if you’re just a camera watching YouTube,

02:25:57 I think that’s a very different problem

02:26:00 than if you’re a robot that can apply force.

02:26:03 And I think that’s a very different problem

02:26:05 than if you’re a robot that can apply force and touch.

02:26:13 I think it’s important.

02:26:15 Yeah, I think it’s just an exciting area of research.

02:26:18 I think you’re probably right

02:26:19 that this has been under researched.

02:26:23 To me as a person who’s captivated

02:26:25 by the idea of human robot interaction,

02:26:27 it feels like such a rich opportunity to explore touch.

02:26:34 Not even from a safety perspective,

02:26:35 but like you said, the emotional too.

02:26:38 I mean, safety comes first,

02:26:41 but the next step is like a real human connection.

02:26:48 Even in the industrial setting,

02:26:51 it just feels like it’s nice for the robot.

02:26:55 I don’t know, you might disagree with this,

02:26:58 but because I think it’s important

02:27:01 to see robots as tools often,

02:27:04 but I don’t know,

02:27:06 I think they’re just always going to be more effective

02:27:08 once you humanize them.

02:27:11 Like it’s convenient now to think of them as tools

02:27:14 because we want to focus on the safety,

02:27:16 but I think ultimately to create like a good experience

02:27:22 for the worker, for the person,

02:27:24 there has to be a human element.

02:27:27 I don’t know, for me,

02:27:30 it feels like an industrial robotic arm

02:27:33 would be better if it has a human element.

02:27:34 I think like Rethink Robotics had that idea

02:27:37 with the Baxter and having eyes and so on,

02:27:40 having, I don’t know, I’m a big believer in that.

02:27:45 It’s not my area, but I am also a big believer.

02:27:49 Do you have an emotional connection to Atlas?

02:27:51 Like do you miss him?

02:27:54 I mean, yes, I don’t know if I more so

02:27:59 than if I had a different science project

02:28:01 that I’d worked on super hard, right?

02:28:03 But yeah, I mean, the robot,

02:28:09 we basically had to do heart surgery on the robot

02:28:11 in the final competition because we melted the core.

02:28:18 Yeah, there was something about watching that robot

02:28:20 hanging there.

02:28:20 We know we had to compete with it in an hour

02:28:22 and it was getting its guts ripped out.

02:28:25 Those are all historic moments.

02:28:27 I think if you look back like a hundred years from now,

02:28:32 yeah, I think those are important moments in robotics.

02:28:35 I mean, these are the early days.

02:28:36 You look at like the early days

02:28:37 of a lot of scientific disciplines.

02:28:39 They look ridiculous, they’re full of failure,

02:28:42 but it feels like robotics will be important

02:28:45 in the coming a hundred years.

02:28:48 And these are the early days.

02:28:50 So I think a lot of people are,

02:28:54 look at a brilliant person such as yourself

02:28:57 and are curious about the intellectual journey they’ve took.

02:29:01 Is there maybe three books, technical, fiction,

02:29:06 philosophical that had a big impact on your life

02:29:10 that you would recommend perhaps others reading?

02:29:15 Yeah, so I actually didn’t read that much as a kid,

02:29:18 but I read fairly voraciously now.

02:29:21 There are some recent books that if you’re interested

02:29:24 in this kind of topic, like AI Superpowers by Kai Fu Lee

02:29:29 is just a fantastic read.

02:29:31 You must read that.

02:29:35 Yuval Harari is just, I think that can open your mind.

02:29:40 Sapiens.

02:29:41 Sapiens is the first one, Homo Deus is the second, yeah.

02:29:51 We mentioned The Black Swan by Taleb.

02:29:53 I think that’s a good sort of mind opener.

02:29:57 I actually, so there’s maybe a more controversial

02:30:04 recommendation I could give.

02:30:06 Great, we love controversy.

02:30:08 In some sense, it’s so classical it might surprise you,

02:30:11 but I actually recently read Mortimer Adler’s

02:30:16 How to Read a Book, not so long, it was a while ago,

02:30:19 but some people hate that book.

02:30:23 I loved it.

02:30:24 I think we’re in this time right now where,

02:30:30 boy, we’re just inundated with research papers

02:30:33 that you could read on arXiv with limited peer review

02:30:38 and just this wealth of information.

02:30:40 I don’t know, I think the passion of what you can get

02:30:46 out of a book, a really good book or a really good paper

02:30:49 if you find it, the attitude, the realization

02:30:52 that you’re only gonna find a few that really

02:30:54 are worth all your time, but then once you find them,

02:30:58 you should just dig in and understand it very deeply

02:31:02 and it’s worth marking it up and having the hard copy

02:31:07 writing in the side notes, side margins.

02:31:11 I think that was really, I read it at the right time

02:31:16 where I was just feeling just overwhelmed

02:31:19 with really low quality stuff, I guess.

02:31:23 And similarly, I’m just giving more than three now,

02:31:28 I’m sorry if I’ve exceeded my quota.

02:31:31 But on that topic just real quick is,

02:31:34 so basically finding a few companions to keep

02:31:38 for the rest of your life in terms of papers and books

02:31:41 and so on and those are the ones,

02:31:44 like not doing, what is it, FOMO, fear of missing out,

02:31:48 constantly trying to update yourself,

02:31:50 but really deeply making a life journey

02:31:53 of studying a particular paper, essentially, set of papers.

02:31:57 Yeah, I think when you really start to understand

02:32:02 when you really find something,

02:32:06 which a book that resonates with you

02:32:07 might not be the same book that resonates with me,

02:32:10 but when you really find one that resonates with you,

02:32:13 I think the dialogue that happens and that’s what,

02:32:16 I loved that Adler was saying, I think Socrates and Plato

02:32:20 say the written word is never gonna capture

02:32:25 the beauty of dialogue, right?

02:32:28 But Adler says, no, no, a really good book

02:32:33 is a dialogue between you and the author

02:32:35 and it crosses time and space and I don’t know,

02:32:39 I think it’s a very romantic,

02:32:40 there’s a bunch of like specific advice,

02:32:42 which you can just gloss over,

02:32:44 but the romantic view of how to read

02:32:47 and really appreciate it is so good.

02:32:52 And similarly, teaching,

02:32:53 yeah, I thought a lot about teaching

02:32:58 and so Isaac Asimov, great science fiction writer,

02:33:03 has also actually spent a lot of his career

02:33:05 writing nonfiction, right?

02:33:07 His memoir is fantastic.

02:33:09 He was passionate about explaining things, right?

02:33:12 He wrote all kinds of books

02:33:13 on all kinds of topics in science.

02:33:16 He was known as the great explainer

02:33:17 and I do really resonate with his style

02:33:22 and just his way of talking about,

02:33:28 by communicating and explaining something

02:33:30 is really the way that you learn something.

02:33:32 I think about problems very differently

02:33:36 because of the way I’ve been given the opportunity

02:33:39 to teach them at MIT.

02:33:42 We have questions asked, the fear of the lecture,

02:33:45 the experience of the lecture

02:33:47 and the questions I get and the interactions

02:33:50 just forces me to be rock solid on these ideas

02:33:53 in a way that if I didn’t have that,

02:33:55 I don’t know, I would be in a different intellectual space.

02:33:58 Also, video, does that scare you

02:34:00 that your lectures are online

02:34:02 and people like me in sweatpants can sit sipping coffee

02:34:05 and watch you give lectures?

02:34:08 I think it’s great.

02:34:09 I do think that something’s changed right now,

02:34:12 which is, right now we’re giving lectures over Zoom.

02:34:16 I mean, giving seminars over Zoom and everything.

02:34:21 I’m trying to figure out, I think it’s a new medium.

02:34:24 I’m trying to figure out how to exploit it.

02:34:28 Yeah, I’ve been quite cynical

02:34:34 about human to human connection over that medium,

02:34:39 but I think that’s because it hasn’t been explored fully

02:34:43 and teaching is a different thing.

02:34:45 Every lecture is a, I’m sorry, every seminar even,

02:34:49 I think every talk I give is an opportunity

02:34:53 to give that differently.

02:34:54 I can deliver content directly into your browser.

02:34:57 You have a WebGL engine right there.

02:35:00 I can throw 3D content into your browser

02:35:04 while you’re listening to me, right?

02:35:06 And I can assume that you have at least

02:35:10 a powerful enough laptop or something to watch Zoom

02:35:13 while I’m doing that, while I’m giving a lecture.

02:35:15 That’s a new communication tool

02:35:18 that I didn’t have last year, right?

02:35:19 And I think robotics can potentially benefit a lot

02:35:24 from teaching that way.

02:35:26 We’ll see, it’s gonna be an experiment this fall.

02:35:28 It’s interesting.

02:35:29 I’m thinking a lot about it.

02:35:30 Yeah, and also like the length of lectures

02:35:35 or the length of like, there’s something,

02:35:38 so like I guarantee you, it’s like 80% of people

02:35:42 who started listening to our conversation

02:35:44 are still listening to now, which is crazy to me.

02:35:48 But so there’s a patience and interest

02:35:51 in long form content, but at the same time,

02:35:53 there’s a magic to forcing yourself to condense

02:35:57 an idea to as short as possible.

02:36:02 As short as possible, like clip,

02:36:04 it can be a part of a longer thing,

02:36:06 but like just like really beautifully condense an idea.

02:36:09 There’s a lot of opportunity there

02:36:11 that’s easier to do in remote with, I don’t know,

02:36:17 with editing too.

02:36:19 Editing is an interesting thing.

02:36:20 Like what, you know, most professors don’t get,

02:36:25 when they give a lecture,

02:36:25 they don’t get to go back and edit out parts,

02:36:28 like crisp it up a little bit.

02:36:31 That’s also, it can do magic.

02:36:34 Like if you remove like five to 10 minutes

02:36:37 from an hour lecture, it can actually,

02:36:41 it can make something special of a lecture.

02:36:43 I’ve seen that in myself and in others too,

02:36:47 because I edit other people’s lectures to extract clips.

02:36:50 It’s like, there’s certain tangents that are like,

02:36:52 that lose, they’re not interesting.

02:36:54 They’re mumbling, they’re just not,

02:36:57 they’re not clarifying, they’re not helpful at all.

02:36:59 And once you remove them, it’s just, I don’t know.

02:37:02 Editing can be magic.

02:37:04 It takes a lot of time.

02:37:05 Yeah, it takes, it depends like what is teaching,

02:37:08 you have to ask.

02:37:09 Yeah, yeah.

02:37:13 Cause I find the editing process is also beneficial

02:37:18 as for teaching, but also for your own learning.

02:37:21 I don’t know if, have you watched yourself?

02:37:23 Yeah, sure.

02:37:24 Have you watched those videos?

02:37:26 I mean, not all of them.

02:37:27 It could be painful to see like how to improve.

02:37:33 So do you find that, I know you segment your podcast.

02:37:37 Do you think that helps people with the,

02:37:40 the attention span aspect of it?

02:37:42 Or is it the segment like sections like,

02:37:44 yeah, we’re talking about this topic, whatever.

02:37:46 Nope, nope, that just helps me.

02:37:48 It’s actually bad.

02:37:49 So, and you’ve been incredible.

02:37:53 So I’m learning, like I’m afraid of conversation.

02:37:56 This is even today, I’m terrified of talking to you.

02:37:59 I mean, it’s something I’m trying to remove for myself.

02:38:04 There’s a guy, I mean, I’ve learned from a lot of people,

02:38:07 but really there’s been a few people

02:38:10 who’s been inspirational to me in terms of conversation.

02:38:14 Whatever people think of him,

02:38:15 Joe Rogan has been inspirational to me

02:38:17 because comedians have been too.

02:38:20 Being able to just have fun and enjoy themselves

02:38:23 and lose themselves in conversation

02:38:25 that requires you to be a great storyteller,

02:38:28 to be able to pull a lot of different pieces

02:38:31 of information together.

02:38:32 But mostly just to enjoy yourself in conversations.

02:38:36 And I’m trying to learn that.

02:38:38 These notes are, you see me looking down.

02:38:41 That’s like a safety blanket

02:38:43 that I’m trying to let go of more and more.

02:38:45 Cool.

02:38:46 So that’s, people love just regular conversation.

02:38:49 That’s what they, the structure is like, whatever.

02:38:52 I would say, I would say maybe like 10 to like,

02:38:57 so there’s a bunch of, you know,

02:38:59 there’s probably a couple of thousand PhD students

02:39:03 listening to this right now, right?

02:39:06 And they might know what we’re talking about.

02:39:09 But there is somebody, I guarantee you right now,

02:39:13 in Russia, some kid who’s just like,

02:39:16 who’s just smoked some weed, is sitting back

02:39:19 and just enjoying the hell out of this conversation.

02:39:22 Not really understanding.

02:39:23 He kind of watched some Boston Dynamics videos.

02:39:25 He’s just enjoying it.

02:39:27 And I salute you, sir.

02:39:29 No, but just like, there’s so much variety of people

02:39:32 that just have curiosity about engineering,

02:39:35 about sciences, about mathematics.

02:39:37 And also like, I should, I mean,

02:39:43 enjoying it is one thing,

02:39:44 but also often notice it inspires people to,

02:39:49 there’s a lot of people who are like

02:39:50 in their undergraduate studies trying to figure out what,

02:39:54 trying to figure out what to pursue.

02:39:56 And these conversations can really spark

02:39:59 the direction of their life.

02:40:01 And in terms of robotics, I hope it does,

02:40:03 because I’m excited about the possibilities

02:40:06 of what robotics brings.

02:40:07 On that topic, do you have advice?

02:40:12 Like what advice would you give

02:40:14 to a young person about life?

02:40:18 A young person about life

02:40:19 or a young person about life in robotics?

02:40:23 It could be in robotics.

02:40:24 Robotics, it could be in life in general.

02:40:26 It could be career.

02:40:28 It could be a relationship advice.

02:40:31 It could be running advice.

02:40:32 Just like they’re, that’s one of the things I see,

02:40:36 like we talked to like 20 year olds.

02:40:38 They’re like, how do I do this thing?

02:40:42 What do I do?

02:40:45 If they come up to you, what would you tell them?

02:40:48 I think it’s an interesting time to be a kid these days.

02:40:53 Everything points to this being sort of a winner,

02:40:57 take all economy and the like.

02:40:59 I think the people that will really excel in my opinion

02:41:04 are going to be the ones that can think deeply

02:41:06 about problems.

02:41:11 You have to be able to ask questions agilely

02:41:13 and use the internet for everything it’s good for

02:41:15 and stuff like this.

02:41:16 And I think a lot of people will develop those skills.

02:41:19 I think the leaders, thought leaders,

02:41:24 robotics leaders, whatever,

02:41:26 are gonna be the ones that can do more

02:41:29 and they can think very deeply and critically.

02:41:32 And that’s a harder thing to learn.

02:41:35 I think one path to learning that is through mathematics,

02:41:38 through engineering.

02:41:41 I would encourage people to start math early.

02:41:44 I mean, I didn’t really start.

02:41:46 I mean, I was always in the better math classes

02:41:50 that I could take,

02:41:51 but I wasn’t pursuing super advanced mathematics

02:41:54 or anything like that until I got to MIT.

02:41:56 I think MIT lit me up

02:41:59 and really started the life that I’m living now.

02:42:05 But yeah, I really want kids to dig deep,

02:42:10 really understand things, building things too.

02:42:12 I mean, pull things apart, put them back together.

02:42:15 Like that’s just such a good way

02:42:17 to really understand things

02:42:19 and expect it to be a long journey, right?

02:42:23 It’s, you don’t have to know everything.

02:42:27 You’re never gonna know everything.

02:42:29 So think deeply and stick with it.

02:42:32 Enjoy the ride, but just make sure you’re not,

02:42:37 yeah, just make sure you’re stopping

02:42:40 to think about why things work.

02:42:43 And it’s true, it’s easy to lose yourself

02:42:45 in the distractions of the world.

02:42:51 We’re overwhelmed with content right now,

02:42:52 but you have to stop and pick some of it

02:42:56 and really understand it.

02:42:58 Yeah, on the book point,

02:43:00 I’ve read Animal Farm by George Orwell

02:43:04 a ridiculous number of times.

02:43:06 So for me, like that book,

02:43:07 I don’t know if it’s a good book in general,

02:43:09 but for me it connects deeply somehow.

02:43:13 It somehow connects, so I was born in the Soviet Union.

02:43:18 So it connects to me into the entirety of the history

02:43:20 of the Soviet Union and to World War II

02:43:23 and to the love and hatred and suffering

02:43:26 that went on there and the corrupting nature of power

02:43:33 and greed and just somehow I just,

02:43:36 that book has taught me more about life

02:43:38 than like anything else.

02:43:39 Even though it’s just like a silly childlike book

02:43:42 about pigs, I don’t know why,

02:43:46 it just connects and inspires.

02:43:49 The same, there’s a few technical books too

02:43:53 and algorithms that just, yeah, you return to often.

02:43:58 I’m with you.

02:44:01 Yeah, there’s, and I’ve been losing that

02:44:04 because of the internet.

02:44:05 I’ve been like going on, I’ve been going on arXiv

02:44:09 and blog posts and GitHub and the new thing

02:44:12 and you lose your ability to really master an idea.

02:44:18 Right.

02:44:18 Wow.

02:44:19 Exactly right.

02:44:21 What’s a fond memory from childhood?

02:44:24 When baby Russ Tedrake.

02:44:29 Well, I guess I just said that at least my current life

02:44:33 began when I got to MIT.

02:44:36 If I have to go farther than that.

02:44:38 Yeah, what was, was there a life before MIT?

02:44:42 Oh, absolutely, but let me actually tell you

02:44:47 what happened when I first got to MIT

02:44:48 because that I think might be relevant here,

02:44:52 but I had taken a computer engineering degree at Michigan.

02:44:57 I enjoyed it immensely, learned a bunch of stuff.

02:45:00 I liked computers, I liked programming,

02:45:04 but when I did get to MIT and started working

02:45:07 with Sebastian Seung, theoretical physicist,

02:45:10 computational neuroscientist, the culture here

02:45:15 was just different.

02:45:17 It demanded more of me, certainly mathematically

02:45:20 and in the critical thinking.

02:45:22 And I remember the day that I borrowed one of the books

02:45:27 from my advisor’s office and walked down

02:45:29 to the Charles River and was like,

02:45:32 I’m getting my butt kicked.

02:45:36 And I think that’s gonna happen to everybody

02:45:38 who’s doing this kind of stuff.

02:45:40 I think I expected you to ask me the meaning of life.

02:45:46 I think that somehow I think that’s gotta be part of it.

02:45:52 Doing hard things?

02:45:55 Yeah.

02:45:56 Did you consider quitting at any point?

02:45:58 Did you consider this isn’t for me?

02:45:59 No, never that.

02:46:01 I was working hard, but I was loving it.

02:46:07 I think there’s this magical thing

02:46:08 where I’m lucky to surround myself with people

02:46:11 that basically almost every day I’ll see something,

02:46:17 I’ll be told something or something that I realize,

02:46:20 wow, I don’t understand that.

02:46:22 And if I could just understand that,

02:46:24 there’s something else to learn.

02:46:26 That if I could just learn that thing,

02:46:28 I would connect another piece of the puzzle.

02:46:30 And I think that is just such an important aspect

02:46:36 and being willing to understand what you can and can’t do

02:46:40 and loving the journey of going

02:46:43 and learning those other things.

02:46:44 I think that’s the best part.

02:46:47 I don’t think there’s a better way to end it, Russ.

02:46:51 You’ve been an inspiration to me since I showed up at MIT.

02:46:55 Your work has been an inspiration to the world.

02:46:57 This conversation was amazing.

02:46:59 I can’t wait to see what you do next

02:47:01 with robotics, home robots.

02:47:03 I hope to see you work in my home one day.

02:47:05 So thanks so much for talking today, it’s been awesome.

02:47:08 Cheers.

02:47:09 Thanks for listening to this conversation

02:47:11 with Russ Tedrake and thank you to our sponsors,

02:47:14 Magic Spoon Cereal, BetterHelp and ExpressVPN.

02:47:18 Please consider supporting this podcast

02:47:20 by going to magicspoon.com slash Lex

02:47:23 and using code Lex at checkout.

02:47:25 Going to betterhelp.com slash Lex

02:47:27 and signing up at expressvpn.com slash Lex pod.

02:47:32 Click the links, buy the stuff, get the discount.

02:47:36 It really is the best way to support this podcast.

02:47:39 If you enjoy this thing, subscribe on YouTube,

02:47:41 review it with five stars on Apple Podcast,

02:47:43 support on Patreon or connect with me on Twitter

02:47:46 at Lex Fridman spelled somehow without the E

02:47:50 just F R I D M A N.

02:47:53 And now let me leave you with some words

02:47:55 from Neil deGrasse Tyson talking about robots in space

02:47:58 and the emphasis we humans put

02:48:00 on human based space exploration.

02:48:03 Robots are important.

02:48:05 If I don my pure scientist hat,

02:48:07 I would say just send robots.

02:48:10 I’ll stay down here and get the data.

02:48:12 But nobody’s ever given a parade for a robot.

02:48:15 Nobody’s ever named a high school after a robot.

02:48:17 So when I don my public educator hat,

02:48:20 I have to recognize the elements of exploration

02:48:22 that excite people.

02:48:24 It’s not only the discoveries and the beautiful photos

02:48:26 that come down from the heavens.

02:48:29 It’s the vicarious participation in discovery itself.

02:48:33 Thank you for listening and hope to see you next time.