Transcript
00:00:00 The following is a conversation with Demis Hassabis,
00:00:03 CEO and co-founder of DeepMind,
00:00:06 a company that has published and built
00:00:08 some of the most incredible artificial intelligence systems
00:00:12 in the history of computing,
00:00:14 including AlphaZero that learned all by itself
00:00:18 to play the game of Go better than any human in the world
00:00:21 and AlphaFold 2 that solved protein folding.
00:00:25 Both tasks considered nearly impossible
00:00:28 for a very long time.
00:00:31 Demis is widely considered to be
00:00:33 one of the most brilliant and impactful humans
00:00:35 in the history of artificial intelligence
00:00:38 and science and engineering in general.
00:00:41 This was truly an honor and a pleasure for me
00:00:44 to finally sit down with him for this conversation.
00:00:47 And I’m sure we will talk many times again in the future.
00:00:51 This is the Lex Fridman Podcast.
00:00:53 To support it, please check out our sponsors
00:00:55 in the description.
00:00:56 And now, dear friends, here’s Demis Hassabis.
00:01:01 Let’s start with a bit of a personal question.
00:01:04 Am I an AI program you wrote to interview people
00:01:07 until I get good enough to interview you?
00:01:11 Well, I’d be impressed if you were.
00:01:13 I’d be impressed by myself if you were.
00:01:14 I don’t think we’re quite up to that yet,
00:01:16 but maybe you’re from the future, Lex.
00:01:18 If you did, would you tell me?
00:01:20 Is that a good thing to tell a language model
00:01:23 that’s tasked with interviewing
00:01:25 that it is, in fact, AI?
00:01:27 Maybe we’re in a kind of meta Turing test.
00:01:29 Probably it would be a good idea not to tell you,
00:01:32 so it doesn’t change your behavior, right?
00:01:33 This is a kind of, like,
00:01:35 Heisenberg uncertainty principle situation.
00:01:37 If I told you, you’d behave differently.
00:01:39 Maybe that’s what’s happening with us, of course.
00:01:40 This is a benchmark from the future
00:01:42 where they replay 2022 as a year
00:01:46 before AIs were good enough yet,
00:01:49 and now we want to see, is it gonna pass?
00:01:52 Exactly.
00:01:52 If I was such a program,
00:01:56 would you be able to tell, do you think?
00:01:57 So to the Turing test question,
00:01:59 you’ve talked about the benchmark for solving intelligence.
00:02:05 What would be the impressive thing?
00:02:07 You’ve talked about winning a Nobel Prize,
00:02:09 an AI system winning a Nobel Prize,
00:02:11 but I still return to the Turing test as a compelling test,
00:02:14 the spirit of the Turing test as a compelling test.
00:02:17 Yeah, the Turing test, of course,
00:02:18 it’s been unbelievably influential,
00:02:20 and Turing’s one of my all time heroes,
00:02:22 but I think if you look back at the 1950 paper,
00:02:24 his original paper and read the original,
00:02:27 you’ll see, I don’t think he meant it
00:02:28 to be a rigorous formal test.
00:02:30 I think it was more like a thought experiment,
00:02:32 almost a bit of philosophy he was writing
00:02:34 if you look at the style of the paper,
00:02:36 and you can see he didn’t specify it very rigorously.
00:02:38 So for example, he didn’t specify the knowledge
00:02:41 that the expert or judge would have.
00:02:45 How much time would they have to investigate this?
00:02:48 So these are important parameters
00:02:49 if you were gonna make it a true sort of formal test.
00:02:54 And by some measures, people claim the Turing test was passed
00:02:58 a decade ago. I remember someone claiming that
00:03:00 with a kind of very bog-standard, normal logic model,
00:03:06 because they pretended it was a kid.
00:03:08 So the judges thought that the machine was a child.
00:03:13 So that would be very different
00:03:15 from an expert AI person interrogating a machine
00:03:18 and knowing how it was built and so on.
00:03:20 So I think we should probably move away from that
00:03:24 as a formal test and move more towards a general test
00:03:28 where we test the AI capabilities on a range of tasks
00:03:32 and see if it reaches human level or above performance
00:03:35 on maybe thousands, perhaps even millions of tasks
00:03:38 eventually and cover the entire sort of cognitive space.
00:03:41 So I think for its time,
00:03:44 it was an amazing thought experiment.
00:03:45 And also 1950s, obviously there’s barely the dawn
00:03:48 of the computer age.
00:03:49 So of course he only thought about text
00:03:51 and now we have a lot more different inputs.
00:03:54 So yeah, maybe the better thing to test
00:03:57 is the generalizability, so across multiple tasks.
00:03:59 But I think it’s also possible as systems like Gato show
00:04:04 that eventually that might map right back to language.
00:04:08 So you might be able to demonstrate your ability
00:04:10 to generalize across tasks by then communicating
00:04:14 your ability to generalize across tasks,
00:04:17 which is kind of what we do through conversation anyway
00:04:19 when we jump around.
00:04:20 Ultimately what’s in there in that conversation
00:04:23 is not just you moving around knowledge,
00:04:27 it’s you moving around like these entirely different
00:04:30 modalities of understanding that ultimately map
00:04:34 to your ability to operate successfully
00:04:38 in all of these domains, which you can think of as tasks.
00:04:42 Yeah, I think certainly we as humans use language
00:04:45 as our main generalization communication tool.
00:04:48 So I think we end up thinking in language
00:04:51 and expressing our solutions in language.
00:04:54 So it’s going to be a very powerful mode in which
00:04:58 to explain the system, to explain what it’s doing.
00:05:03 But I don’t think it’s the only modality that matters.
00:05:07 So I think there’s going to be a lot of different ways
00:05:10 to express capabilities other than just language.
00:05:15 Yeah, visual, robotics, body language,
00:05:21 yeah, actions, the interactive aspect of all that.
00:05:23 That’s all part of it.
00:05:24 But what’s interesting with Gato is that
00:05:27 it’s sort of pushing prediction to the maximum
00:05:30 in terms of like mapping arbitrary sequences
00:05:33 to other sequences and sort of just predicting
00:05:35 what’s going to happen next.
00:05:36 So prediction seems to be fundamental to intelligence.
00:05:41 And what you’re predicting doesn’t so much matter.
00:05:44 Yeah, it seems like you can generalize that quite well.
00:05:46 So obviously language models predict the next word,
00:05:49 Gato predicts potentially any action or any token.
00:05:53 And it’s just the beginning really.
00:05:55 It’s our most general agent one could call it so far,
00:05:58 but that itself can be scaled up massively more
00:06:01 than we’ve done so far.
00:06:02 And obviously we’re in the middle of doing that.
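A minimal sketch of the idea Demis describes here, assuming nothing about Gato’s actual architecture: once observations and actions are flattened into one token stream, a single next-token predictor can be trained on all of them with one objective. The bigram counter below is only the simplest possible stand-in for such a predictor, and the token names are made up.

```python
# Toy illustration of "everything is sequence prediction", not Gato itself:
# flatten any modality into tokens, then predict the next token.
from collections import Counter, defaultdict

# Hypothetical mixed-modality episode: observation and action tokens
# interleaved in one stream, just as a language model sees words.
episode = ["obs:wall", "act:turn_left", "obs:door", "act:open",
           "obs:room", "act:stop"]

# The simplest possible next-token predictor: bigram counts.
counts = defaultdict(Counter)
for prev, nxt in zip(episode, episode[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Most likely token to follow `token`, whatever its modality."""
    following = counts.get(token)
    return following.most_common(1)[0][0] if following else None

print(predict_next("obs:door"))  # -> "act:open"
```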
00:06:04 But the big part of solving AGI is creating benchmarks
00:06:08 that help us get closer and closer,
00:06:11 sort of creating benchmarks that test the generalizability.
00:06:14 And it’s just still interesting that this fella,
00:06:17 Alan Turing, was one of the first
00:06:20 and probably still one of the only people
00:06:22 that was trying, maybe philosophically,
00:06:25 but was trying to formulate a benchmark
00:06:26 that could be followed.
00:06:27 It is, even though it’s fuzzy,
00:06:30 it’s still sufficiently rigorous
00:06:32 to where you can run that test.
00:06:33 And I still think something like the Turing test
00:06:36 will, at the end of the day,
00:06:38 be the thing that truly impresses other humans
00:06:42 so that you can have a close friend who’s an AI system.
00:06:46 And for that friend to be a good friend,
00:06:48 they’re going to have to be able to play StarCraft
00:06:53 and they’re gonna have to do all of these tasks,
00:06:56 get you a beer, so the robotics tasks,
00:07:00 play games with you, use language,
00:07:03 humor, all of those kinds of things.
00:07:04 But that ultimately can boil down to language.
00:07:08 It feels like, not in terms of the AI community,
00:07:11 but in terms of the actual impact
00:07:13 of general intelligence on the world,
00:07:14 it feels like language will be the place
00:07:16 where it truly shines.
00:07:18 I think so, because it’s such an important
00:07:20 kind of input output for us.
00:07:22 I think you’re right.
00:07:23 I think the Turing test,
00:07:24 the kind of the philosophy behind it,
00:07:27 which is the idea of can a machine mimic
00:07:31 the behaviors of a human?
00:07:34 And I would say wider than just language and text,
00:07:38 then in terms of actions and everything else,
00:07:41 creativity, all these things,
00:07:42 then if it can sort of match or exceed
00:07:45 human cognitive capabilities,
00:07:47 then I think we have a true intelligence.
00:07:49 So from that perspective, you’re right.
00:07:51 I think he did formulate the right kind of setup.
00:07:55 I just, I think there’ll be a kind of humor
00:07:59 in the AI systems of the future looking back
00:08:01 to this conversation and thinking about the Turing test
00:08:06 and also thinking about by that time,
00:08:09 they would know which year they were finally able
00:08:11 to sort of cross the threshold of human level intelligence
00:08:15 and think how funny it is that we humans
00:08:18 were still confused about this whole problem
00:08:22 that has long ago been solved.
00:08:24 Well, hello to the AI agents of the future.
00:08:27 Anyway, so going back to your journey,
00:08:30 when did you fall in love with programming first?
00:08:33 Well, at a pretty young age actually.
00:08:35 So, I started off, actually games were my first love.
00:08:40 I started playing chess when I was around four years old,
00:08:43 and then it was actually with winnings
00:08:46 from a chess competition that I managed
00:08:48 to buy my first computer
00:08:49 when I was about eight years old.
00:08:50 It was a ZX Spectrum, which was hugely popular
00:08:53 in the UK at the time.
00:08:54 And it was an amazing machine because I think it trained
00:08:58 a whole generation of programmers in the UK
00:09:00 because it was so accessible.
00:09:02 You know, you literally switched it on
00:09:03 and there was the BASIC prompt
00:09:05 and you could just get going.
00:09:06 And my parents didn’t really know anything about computers.
00:09:09 But because it was my money from a chess competition,
00:09:12 I could say I wanted to buy it.
00:09:15 And then, you know, I just went to bookstores,
00:09:17 got books on programming and started typing in,
00:09:22 you know, the programming code.
00:09:23 And then of course, once you start doing that,
00:09:26 you start adjusting it and then making your own games.
00:09:29 And that’s when I fell in love with computers
00:09:30 and realized that they were a very magical device.
00:09:34 In a way, I kind of, I wouldn’t have been able
00:09:36 to explain this at the time,
00:09:37 but I felt that they were sort of almost
00:09:38 a magical extension of your mind.
00:09:40 I always had this feeling and I’ve always loved this
00:09:43 about computers that you can set them off doing something,
00:09:46 some task for you, you can go to sleep,
00:09:48 come back the next day and it’s solved.
00:09:51 You know, that feels magical to me.
00:09:53 So, I mean, all machines do that to some extent.
00:09:55 They all enhance our natural capabilities.
00:09:57 Obviously cars allow us to move faster
00:10:00 than we can run, but this was a machine to extend the mind.
00:10:04 And then of course, AI is the ultimate expression
00:10:08 of what a machine may be able to do or learn.
00:10:11 So very naturally for me, that thought extended
00:10:14 into AI quite quickly.
00:10:16 Do you remember the programming language
00:10:18 that you first started with, and was it special to the machine?
00:10:22 No, I think it was just BASIC on the ZX Spectrum.
00:10:25 I don’t know what specific form it was.
00:10:27 And then later on I got a Commodore Amiga,
00:10:29 which was a fantastic machine.
00:10:32 Now you’re just showing off.
00:10:33 So yeah, well, lots of my friends had Atari STs
00:10:36 and I managed to get an Amiga, which was a bit more powerful,
00:10:38 and that was incredible, and I used to do programming
00:10:42 in assembler and also AMOS BASIC,
00:10:46 this specific form of BASIC, it was incredible actually.
00:10:49 So that’s where I learned all my coding skills.
00:10:51 And when did you fall in love with AI?
00:10:53 So when did you first start to gain an understanding
00:10:56 that you can not just write programs
00:10:58 that do some mathematical operations for you
00:11:01 while you sleep, but something that’s akin
00:11:05 to bringing an entity to life,
00:11:08 sort of a thing that can figure out something
00:11:11 more complicated than a simple mathematical operation.
00:11:15 Yeah, so there was a few stages for me
00:11:17 all while I was very young.
00:11:18 So first of all, as I was trying to improve
00:11:21 at playing chess, I was captaining
00:11:23 various England junior chess teams.
00:11:24 And at the time when I was about maybe 10, 11 years old,
00:11:27 I was gonna become a professional chess player.
00:11:29 That was my first thought.
00:11:32 So that dream was there to try to get
00:11:34 to the highest levels of chess.
00:11:35 Yeah, so when I was about 12 years old,
00:11:39 I got to master standard and I was the second highest rated
00:11:41 player in the world after Judit Polgar,
00:11:42 who obviously ended up being an amazing chess player
00:11:45 and a world women’s champion.
00:11:48 And when I was trying to improve at chess,
00:11:50 what you do is, obviously, first of all,
00:11:52 you’re trying to improve your own thinking processes.
00:11:55 So that leads you to thinking about thinking,
00:11:58 how is your brain coming up with these ideas?
00:12:00 Why is it making mistakes?
00:12:01 How can you improve that thought process?
00:12:04 But the second thing is that
00:12:06 it was just the beginning, this was like in the early 80s,
00:12:09 mid 80s, of chess computers.
00:12:11 If you remember, they were physical boards
00:12:12 like the one we have in front of us.
00:12:14 And you press down the squares.
00:12:17 And I think Kasparov had a branded version of it
00:12:19 that I got.
00:12:21 And you used to, they’re not as strong as they are today,
00:12:24 but they were pretty strong and you used to practice
00:12:27 against them to try and improve your openings
00:12:30 and other things.
00:12:31 And so I remember, I think I probably got my first one,
00:12:33 I was around 11 or 12.
00:12:34 And I remember thinking, this is amazing,
00:12:37 how has someone programmed this chess board to play chess?
00:12:42 And there was a very formative book I bought,
00:12:45 which was called The Chess Computer Handbook
00:12:47 by David Levy.
00:12:49 This thing came out in 1984 or something.
00:12:50 So I must’ve got it when I was about 11, 12.
00:12:52 And it explained fully how these chess programs were made.
00:12:56 And I remember my first AI program
00:12:57 being programming my Amiga.
00:13:00 It couldn’t, it wasn’t powerful enough to play chess.
00:13:02 I couldn’t write a whole chess program,
00:13:04 but I wrote a program for it to play Othello, or Reversi
00:13:07 as it’s sometimes called I think in the US.
00:13:09 And so a slightly simpler game than chess,
00:13:11 but I used all of the principles that chess programs had,
00:13:14 alpha-beta search, all of that.
00:13:16 And that was my first AI program.
00:13:17 I remember that very well, I was around 12 years old.
00:13:19 So that brought me into AI.
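For readers unfamiliar with the alpha-beta search Demis mentions, here is a minimal sketch of the principle those early chess and Othello programs used: minimax with pruning of branches the opponent would never allow. The `game` interface (legal_moves, apply, undo, is_over, evaluate) is hypothetical, for illustration only.

```python
def alphabeta(game, depth, alpha=float("-inf"), beta=float("inf"),
              maximizing=True):
    """Minimax value of the current position, pruning hopeless branches."""
    if depth == 0 or game.is_over():
        return game.evaluate()          # static score of this position
    if maximizing:
        best = float("-inf")
        for move in game.legal_moves():
            game.apply(move)
            best = max(best, alphabeta(game, depth - 1, alpha, beta, False))
            game.undo()
            alpha = max(alpha, best)
            if alpha >= beta:           # opponent would never allow this line,
                break                   # so prune the remaining moves
        return best
    best = float("inf")
    for move in game.legal_moves():
        game.apply(move)
        best = min(best, alphabeta(game, depth - 1, alpha, beta, True))
        game.undo()
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best
```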
00:13:21 And then the second part was later on,
00:13:24 when I was around 16, 17,
00:13:25 and I was writing games professionally, designing games,
00:13:28 writing a game called Theme Park,
00:13:30 which had AI as a core gameplay component
00:13:34 as part of the simulation.
00:13:35 And it sold millions of copies around the world.
00:13:38 And people loved the way that the AI,
00:13:41 even though it was relatively simple
00:13:42 by today’s AI standards,
00:13:44 was reacting to the way you as the player played it.
00:13:47 So it was called a sandbox game.
00:13:49 So it was one of the first types of games like that,
00:13:51 along with SimCity.
00:13:52 And it meant that every game you played was unique.
00:13:55 Is there something you could say just on a small tangent
00:13:58 about really impressive AI
00:14:02 from a game design, human enjoyment perspective,
00:14:06 really impressive AI that you’ve seen in games
00:14:09 and maybe what does it take to create an AI system?
00:14:12 And how hard of a problem is that?
00:14:14 So a million questions just as a brief tangent.
00:14:18 Well, look, I think games have been significant in my life
00:14:23 for three reasons.
00:14:23 So first of all, I was playing them
00:14:26 and training myself on games when I was a kid.
00:14:28 Then I went through a phase of designing games
00:14:31 and writing AI for games.
00:14:33 So all the games I professionally wrote
00:14:35 had AI as a core component.
00:14:37 And that was mostly in the 90s.
00:14:40 And the reason I was doing that in games industry
00:14:42 was at the time the games industry,
00:14:45 I think was the cutting edge of technology.
00:14:47 So whether it was graphics with people like John Carmack
00:14:49 and Quake and those kinds of things or AI,
00:14:53 I think actually all the action was going on in games.
00:14:56 And we’re still reaping the benefits of that
00:14:58 even with things like GPUs, which I find ironic
00:15:01 was obviously invented for graphics, computer graphics,
00:15:03 but then turns out to be amazingly useful for AI.
00:15:06 It just turns out everything’s a matrix multiplication,
00:15:08 it appears, in the whole world.
00:15:11 So I think games at the time had the most cutting edge AI.
00:15:15 And a lot of those games I was involved in writing.
00:15:19 So there was a game called Black and White,
00:15:21 which was one game I was involved with
00:15:22 in the early stages of,
00:15:24 which I still think is the most impressive example
00:15:28 of reinforcement learning in a computer game.
00:15:30 So in that game, you trained a little pet animal and…
00:15:34 It’s a brilliant game.
00:15:35 And it sort of learned from how you were treating it.
00:15:37 So if you treated it badly, then it became mean.
00:15:40 And then it would be mean to your villagers
00:15:42 and your population, the sort of the little tribe
00:15:45 that you were running.
00:15:47 But if you were kind to it, then it would be kind.
00:15:49 And people were fascinated by how that works.
00:15:51 And so was I to be honest with the way it kind of developed.
00:15:54 And…
00:15:55 Especially the mapping to good and evil.
00:15:57 Yeah.
00:15:58 Made you realize, made me realize that sort of
00:16:01 the choices you make can define where you end up.
00:16:07 And that means all of us are capable of good and evil.
00:16:12 It all matters in the different choices
00:16:15 along the trajectory to those places that you make.
00:16:18 It’s fascinating.
00:16:19 I mean, games can do that philosophically to you.
00:16:21 And it’s rare.
00:16:22 It seems rare.
00:16:23 Yeah.
00:16:23 Well, games are, I think, a unique medium
00:16:24 because you as the player,
00:16:26 you’re not just passively consuming the entertainment,
00:16:30 right?
00:16:30 You’re actually actively involved as an agent.
00:16:34 So I think that’s what makes it in some ways
00:16:36 can be more visceral than other mediums
00:16:38 like films and books.
00:16:40 So the second, so that was designing AI in games.
00:16:42 And then the third use of games
00:16:46 is at DeepMind from the beginning,
00:16:48 which is using games as a testing ground
00:16:50 for proving out AI algorithms and developing AI algorithms.
00:16:55 And that was a sort of a core component
00:16:58 of our vision at the start of DeepMind
00:17:00 was that we would use games very heavily
00:17:03 as our main testing ground, certainly to begin with,
00:17:06 because it’s super efficient to use games.
00:17:08 And also, it’s very easy to have metrics
00:17:11 to see how well your systems are improving
00:17:14 and what direction your ideas are going in
00:17:15 and whether you’re making incremental improvements.
00:17:18 And because those games are often rooted
00:17:20 in something that humans did for a long time beforehand,
00:17:23 there’s already a strong set of rules.
00:17:26 Like it’s already a damn good benchmark.
00:17:28 Yes, it’s really good for so many reasons
00:17:30 because you’ve got clear measures
00:17:32 of how good humans can be at these things.
00:17:35 And in some cases like Go,
00:17:36 we’ve been playing it for thousands of years
00:17:39 and often they have scores or at least win conditions.
00:17:43 So it’s very easy for reward learning systems
00:17:45 to get a reward.
00:17:46 It’s very easy to specify what that reward is.
00:17:49 And also at the end, it’s easy to test externally
00:17:54 how strong your system is by, of course,
00:17:56 playing against the world’s strongest players at those games.
00:18:00 So it’s so good for so many reasons
00:18:02 and it’s also very efficient to run potentially millions
00:18:05 of simulations in parallel on the cloud.
00:18:08 So I think that’s a huge reason why we were so successful
00:18:12 starting out back in 2010,
00:18:14 and how we were able to progress so quickly,
00:18:16 because we utilized games.
00:18:18 And at the beginning of DeepMind,
00:18:21 we also hired some amazing game engineers
00:18:24 who I knew from my previous lives in the games industry.
00:18:28 And that helped to bootstrap us very quickly.
00:18:30 And plus it’s somehow super compelling
00:18:33 almost at a philosophical level of man versus machine
00:18:38 over a chess board or a Go board.
00:18:41 And especially given that the entire history of AI
00:18:43 is defined by people saying it’s gonna be impossible
00:18:45 to make a machine that beats a human being in chess.
00:18:50 And then once that happened,
00:18:53 people were certain when I was coming up in AI
00:18:55 that Go is not a game that can be solved
00:18:58 because the combinatorial complexity is just too great,
00:19:02 no matter how much Moore’s law you have,
00:19:06 compute is just never going to be able
00:19:08 to crack the game of Go.
00:19:10 And so then there’s something compelling about facing,
00:19:14 sort of taking on the impossibility of that task
00:19:18 from the AI researcher perspective,
00:19:22 engineer perspective, and then as a human being,
00:19:24 just observing this whole thing.
00:19:27 Your beliefs about what you thought was impossible
00:19:32 being broken apart,
00:19:35 it’s humbling to realize we’re not as smart as we thought.
00:19:40 It’s humbling to realize that the things we think
00:19:43 are impossible now perhaps will be done in the future.
00:19:47 There’s something really powerful about a game,
00:19:50 an AI system beating a human being in a game
00:19:52 that drives that message home
00:19:55 for like millions, billions of people,
00:19:58 especially in the case of Go.
00:19:59 Sure.
00:20:00 Well, look, I think it’s,
00:20:01 I mean, it has been a fascinating journey
00:20:03 and especially as I think about it from,
00:20:06 I can understand it from both sides,
00:20:08 both as creators of the AI,
00:20:13 but also as a games player originally.
00:20:15 So, it was a really interesting,
00:20:17 I mean, it was a fantastic, but also somewhat
00:20:21 bittersweet moment, the AlphaGo match for me,
00:20:24 seeing that and being obviously heavily involved in that.
00:20:29 But as you say, chess has been the,
00:20:32 I mean, Kasparov, I think rightly called it
00:20:34 the Drosophila of intelligence, right?
00:20:37 So, it’s sort of, I love that phrase
00:20:39 and I think he’s right because chess has been
00:20:42 hand in hand with AI from the beginning
00:20:45 of the whole field, right?
00:20:47 So, I think every AI practitioner,
00:20:49 starting with Turing and Claude Shannon and all those,
00:20:52 the sort of forefathers of the field,
00:20:56 tried their hand at writing a chess program.
00:20:58 I’ve got an original edition of Claude Shannon’s
00:21:01 first chess program, I think it was 1949,
00:21:04 the original sort of paper.
00:21:06 And they all did that and Turing famously wrote
00:21:09 a chess program, but all the computers around them
00:21:12 were obviously too slow to run it.
00:21:13 So, he had to run, he had to be the computer, right?
00:21:16 So, he literally, I think spent two or three days
00:21:18 running his own program by hand with pencil and paper
00:21:21 and playing a friend of his with his chess program.
00:21:24 So, of course, Deep Blue was a huge moment,
00:21:28 beating Kasparov, but actually when that happened,
00:21:31 I remember that very vividly, of course,
00:21:34 because it was chess and computers and AI,
00:21:36 all the things I loved and I was at college at the time.
00:21:39 But I remember coming away from that,
00:21:40 being more impressed by Kasparov’s mind
00:21:43 than I was by Deep Blue.
00:21:44 Because here was Kasparov with his human mind,
00:21:47 not only could he play chess more or less
00:21:49 to the same level as this brute of a calculation machine,
00:21:53 but of course, Kasparov can do everything else
00:21:55 humans can do, ride a bike, talk many languages,
00:21:57 do politics, all the rest of the amazing things
00:21:59 that Kasparov does.
00:22:00 And so, with the same brain.
00:22:03 And yet Deep Blue, brilliant as it was at chess,
00:22:07 it’d been hand coded for chess and actually had distilled
00:22:12 the knowledge of chess grandmasters into a cool program,
00:22:16 but it couldn’t do anything else.
00:22:18 It couldn’t even play a strictly simpler game
00:22:20 like tic tac toe.
00:22:21 So, something to me was missing from intelligence
00:22:25 from that system that we would regard as intelligence.
00:22:28 And I think it was this idea of generality
00:22:30 and also learning.
00:22:33 So, and that’s obviously what we tried to do with AlphaGo.
00:22:36 Yeah, with AlphaGo and AlphaZero, MuZero,
00:22:38 and then Gato and all the things that we’ll get into
00:22:42 some parts of, there’s just a fascinating trajectory here.
00:22:45 But let’s just stick on chess briefly.
00:22:48 On the human side of chess, you’ve proposed that
00:22:51 from a game design perspective,
00:22:53 the thing that makes chess compelling as a game
00:22:57 is that there’s a creative tension between a bishop
00:23:01 and the knight.
00:23:02 Can you explain this?
00:23:04 First of all, it’s really interesting to think about
00:23:06 what makes a game compelling,
00:23:08 makes it stick across centuries.
00:23:12 Yeah, I was sort of thinking about this,
00:23:13 and actually a lot of even amazing chess players
00:23:15 don’t think about it necessarily
00:23:16 from a game’s designer point of view.
00:23:18 So, it’s with my game design hat on
00:23:20 that I was thinking about this, why is chess so compelling?
00:23:23 And I think a critical reason is that the dynamism
00:23:27 of the different kinds of chess positions you can have,
00:23:30 whether they’re closed or open and other things,
00:23:32 comes from the bishop and the knight.
00:23:33 So, if you think about how different
00:23:36 the capabilities of the bishop and knight are
00:23:39 in terms of the way they move,
00:23:40 and then somehow chess has evolved
00:23:43 to balance those two capabilities more or less equally.
00:23:46 So, they’re both roughly worth three points each.
00:23:48 So, you think that dynamics is always there
00:23:50 and then the rest of the rules
00:23:51 are kind of trying to stabilize the game.
00:23:53 Well, maybe, I mean, it’s sort of,
00:23:55 I don’t know, it’s chicken and egg situation,
00:23:56 probably both came together.
00:23:57 But the fact that it’s got to this beautiful equilibrium
00:24:00 where you can have the bishop and knight
00:24:02 that are so different in power,
00:24:04 but so equal in value across the set
00:24:06 of the universe of all positions, right?
00:24:09 Somehow they’ve been balanced by humanity
00:24:11 over hundreds of years,
00:24:13 I think gives the game the creative tension
00:24:16 that you can swap the bishop and knight,
00:24:19 a bishop for a knight,
00:24:20 and they’re more or less worth the same,
00:24:22 but now you aim for a different type of position.
00:24:24 If you have the knight, you want a closed position.
00:24:26 If you have the bishop, you want an open position.
00:24:28 So, I think that creates
00:24:29 a lot of the creative tension in chess.
00:24:30 So, some kind of controlled creative tension.
00:24:34 From an AI perspective,
00:24:35 do you think AI systems could eventually design games
00:24:38 that are optimally compelling to humans?
00:24:41 Well, that’s an interesting question.
00:24:42 Sometimes I get asked about AI and creativity,
00:24:46 and the way I answered that is relevant to that question,
00:24:48 which is that I think there are different levels
00:24:51 of creativity, one could say.
00:24:52 So, I think if we define creativity
00:24:55 as coming up with something original, right,
00:24:57 that’s useful for a purpose,
00:24:59 then I think the kind of lowest level of creativity
00:25:02 is like an interpolation.
00:25:03 So, an averaging of all the examples you see.
00:25:06 So, maybe a very basic AI system,
00:25:08 you could say, could do that.
00:25:09 So, you show it millions of pictures of cats,
00:25:11 and then you say, give me an average looking cat, right?
00:25:13 Generate me an average looking cat.
00:25:15 I would call that interpolation.
00:25:17 Then there’s extrapolation,
00:25:18 which something like AlphaGo showed.
00:25:20 So, AlphaGo played millions of games of Go against itself,
00:25:24 and then it came up with brilliant new ideas
00:25:26 like Move 37 in game two, brilliant motifs and strategies in Go
00:25:30 that no humans had ever thought of,
00:25:32 even though we’ve played it for thousands of years
00:25:34 and professionally for hundreds of years.
00:25:36 So, that I call that extrapolation,
00:25:38 but then there’s still a level above that,
00:25:41 which is, you could call it out-of-the-box thinking
00:25:44 or true innovation, which is, could you invent Go, right?
00:25:47 Could you invent chess and not just come up
00:25:49 with a brilliant chess move or brilliant Go move,
00:25:51 but can you actually invent chess
00:25:53 or something as good as chess or Go?
00:25:55 And I think one day AI could, but then what’s missing
00:26:00 is how would you even specify that task
00:26:02 to a program right now?
00:26:04 And the way I would do it if I was telling a human to do it
00:26:07 or a human games designer to do it is I would say,
00:26:10 something like Go, I would say, come up with a game
00:26:14 that only takes five minutes to learn,
00:26:16 which Go does because it’s got simple rules,
00:26:17 but many lifetimes to master, right?
00:26:20 Or impossible to master in one lifetime
00:26:22 because it’s so deep and so complex.
00:26:23 And then it’s aesthetically beautiful.
00:26:26 And also it can be completed in three or four hours
00:26:30 of gameplay time, which is useful for us in a human day.
00:26:35 And so you might specify these sort of high level concepts
00:26:38 like that, and then with that
00:26:40 and then maybe a few other things,
00:26:42 one could imagine that Go satisfies those constraints.
00:26:47 But the problem is that we’re not able
00:26:49 to specify abstract notions like that,
00:26:53 high level abstract notions like that yet to our AI systems.
00:26:57 And I think there’s still something missing there
00:26:58 in terms of high level concepts or abstractions
00:27:01 that they truly understand
00:27:03 and they’re combinable and compositional.
00:27:06 So for the moment, I think AI is capable
00:27:09 of doing interpolation and extrapolation,
00:27:11 but not true invention.
00:27:13 So coming up with rule sets and optimizing
00:27:18 with complicated objectives around those rule sets,
00:27:20 we can’t currently do.
00:27:22 But you could take a specific rule set
00:27:25 and then run a kind of self play experiment
00:27:28 to see how long, just observe how an AI system
00:27:32 from scratch learns, how long is that journey of learning?
00:27:35 And maybe if it satisfies some of those other things
00:27:39 you mentioned in terms of quickness to learn and so on,
00:27:41 and you could see a long journey to master
00:27:44 for even an AI system, then you could say
00:27:46 that this is a promising game.
00:27:49 But it would be nice to do almost like AlphaCode
00:27:51 so programming rules.
00:27:53 So generating rules that automate even that part
00:27:59 of the generation of rules.
00:28:00 So I have thought about systems actually
00:28:02 that I think would be amazing for a games designer.
00:28:05 If you could have a system that takes your game,
00:28:09 plays it tens of millions of times, maybe overnight,
00:28:11 and then self balances the rules better.
00:28:13 So it tweaks the rules and maybe the equations
00:28:18 and the parameters so that the game is more balanced,
00:28:22 the units in the game or some of the rules could be tweaked.
00:28:26 So it’s a bit like giving it a base set
00:28:28 and then allowing Monte Carlo tree search
00:28:30 or something like that to sort of explore it.
00:28:33 And I think that would be a super powerful tool actually
00:28:37 for balancing, auto balancing a game,
00:28:39 which usually takes thousands of hours
00:28:42 from hundreds of human games testers normally
00:28:44 to balance a game like StarCraft,
00:28:47 which, Blizzard are amazing at balancing their games,
00:28:50 but it takes them years and years and years.
00:28:52 So one could imagine at some point,
00:28:54 when this stuff becomes efficient enough,
00:28:57 you might be able to do that, like, overnight.
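A sketch of the overnight auto-balancing loop described here, with made-up parameter names and a stand-in simulator: perturb one game parameter, run many simulated games, and keep the change if it pushes the matchup closer to 50/50.

```python
import random

def simulate_game(params):
    # Hypothetical stand-in for a fast game simulator: returns True when
    # side A wins, with the win probability skewed by a strength gap.
    gap = params["unit_a_strength"] - params["unit_b_strength"]
    return random.random() < 0.5 + 0.1 * gap

def win_rate(params, n_games=2000):
    return sum(simulate_game(params) for _ in range(n_games)) / n_games

params = {"unit_a_strength": 1.3, "unit_b_strength": 1.0}
for _ in range(100):                                # "overnight" = many steps
    candidate = dict(params)
    key = random.choice(list(candidate))
    candidate[key] += random.uniform(-0.05, 0.05)   # tweak one rule/parameter
    if abs(win_rate(candidate) - 0.5) < abs(win_rate(params) - 0.5):
        params = candidate                          # keep the more balanced game

print(params, round(win_rate(params), 3))
```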
00:28:59 Do you think a game that is optimally designed by an AI system
00:29:05 would look very much like a planet earth?
00:29:09 Maybe, maybe. The sort of game
00:29:11 I would love to make, and I’ve tried in my games career,
00:29:16 my games design career, my first big game
00:29:18 was designing a theme park, an amusement park.
00:29:21 Then with games like Republic, I tried to have games
00:29:25 where we designed whole cities and allowed you to play in.
00:29:28 So, and of course people like Will Wright
00:29:30 have written games like SimEarth,
00:29:32 trying to simulate the whole of earth, pretty tricky,
00:29:35 but I think.
00:29:36 SimEarth, I haven’t actually played that one.
00:29:37 So what is it?
00:29:38 Does it incorporate evolution or?
00:29:40 Yeah, it has evolution and it sort of tries to,
00:29:43 it sort of treats it as an entire biosphere,
00:29:45 but from quite high level.
00:29:47 So.
00:29:48 It’d be nice to be able to sort of zoom in,
00:29:50 zoom out and zoom in.
00:29:51 Exactly, exactly.
00:29:52 So obviously it couldn’t do, that was in the 90s.
00:29:53 I think he wrote that in the 90s.
00:29:54 So it couldn’t, it wasn’t able to do that,
00:29:57 but that would be obviously the ultimate sandbox game.
00:30:00 Of course.
00:30:01 On that topic, do you think we’re living in a simulation?
00:30:04 Yes, well, so, okay.
00:30:06 So I.
00:30:07 We’re gonna jump around from the absurdly philosophical
00:30:09 to the technical.
00:30:10 Sure, sure, very, very happy to.
00:30:11 So I think my answer to that question
00:30:13 is a little bit complex because there is simulation theory,
00:30:17 which obviously Nick Bostrom,
00:30:18 I think famously first proposed.
00:30:21 And I don’t quite believe it in that sense.
00:30:24 So in the sense that are we in some sort of computer game
00:30:29 or have our descendants somehow recreated Earth
00:30:34 in the 21st century
00:30:36 for some kind of experimental reason.
00:30:38 But I do think
00:30:41 that the best way to understand physics
00:30:45 and the universe is from a computational perspective.
00:30:49 So understanding it as an information universe
00:30:52 and actually information being the most fundamental unit
00:30:56 of reality rather than matter or energy.
00:30:59 So a physicist would say, you know, matter or energy,
00:31:02 you know, E equals MC squared.
00:31:03 These are the things that are the fundamentals
00:31:06 of the universe.
00:31:07 I’d actually say information,
00:31:09 which of course itself can be,
00:31:11 can specify energy or matter, right?
00:31:13 Matter is actually just, you know,
00:31:14 the way our bodies
00:31:16 and the molecules in our body are arranged as information.
00:31:19 So I think information may be the most fundamental way
00:31:23 to describe the universe.
00:31:24 And therefore you could say we’re in some sort of simulation
00:31:28 because of that.
00:31:29 But I’m not,
00:31:31 I’m not really a subscriber to the idea that, you know,
00:31:34 there are sort of throwaway billions of simulations around.
00:31:36 I think this is actually very critical and possibly unique,
00:31:40 this simulation.
00:31:41 This particular one.
00:31:42 Yes.
00:31:43 And you just mean treating the universe as a computer
00:31:48 that’s processing and modifying information
00:31:52 is a good way to solve the problems of physics,
00:31:54 of chemistry, of biology,
00:31:57 and perhaps of humanity and so on.
00:31:59 Yes, I think understanding physics
00:32:02 in terms of information theory
00:32:04 might be the best way to really understand
00:32:07 what’s going on here.
00:32:09 From our understanding of a universal Turing machine,
00:32:13 from our understanding of a computer,
00:32:15 do you think there’s something outside
00:32:17 of the capabilities of a computer
00:32:19 that is present in our universe?
00:32:21 You have a disagreement with Roger Penrose
00:32:23 about the nature of consciousness.
00:32:25 He thinks that consciousness is more
00:32:27 than just a computation.
00:32:30 Do you think all of it, the whole shebang,
00:32:32 can be a computation?
00:32:34 Yeah, I’ve had many fascinating debates
00:32:35 with Sir Roger Penrose,
00:32:37 and obviously he’s famously,
00:32:39 and I read, you know, The Emperor’s New Mind
00:32:41 and his books, his classical books,
00:32:45 and they were pretty influential in the 90s.
00:32:47 And he believes that there’s something more,
00:32:50 something quantum that is needed
00:32:53 to explain consciousness in the brain.
00:32:55 I think about what we’re doing actually at DeepMind
00:32:58 and what my career is being,
00:32:59 we’re almost like Turing’s champion.
00:33:01 So we are pushing Turing machines or classical computation
00:33:05 to the limits.
00:33:06 What are the limits of what classical computing can do?
00:33:09 Now, and at the same time,
00:33:11 I’ve also studied neuroscience to see,
00:33:14 and that’s why I did my PhD,
00:33:15 to see, also to look at, you know,
00:33:17 is there anything quantum in the brain
00:33:19 from a neuroscience or biological perspective?
00:33:21 And so far, I think most neuroscientists
00:33:24 and most mainstream biologists and neuroscientists
00:33:26 would say there’s no evidence of any quantum systems
00:33:29 or effects in the brain.
00:33:30 As far as we can see, it can be mostly explained
00:33:33 by classical theories.
00:33:35 So, and then, so there’s sort of the search
00:33:39 from the biology side.
00:33:40 And then at the same time,
00:33:42 there’s the raising of the water, the bar,
00:33:44 from what classical Turing machines can do.
00:33:48 And, you know, including our new AI systems.
00:33:51 And as you alluded to earlier, you know,
00:33:55 I think AI, especially in the last decade plus,
00:33:57 has been a continual story now of surprising events
00:34:02 and surprising successes,
00:34:03 knocking over one theory after another
00:34:05 of what was thought to be impossible, you know,
00:34:07 from Go to protein folding and so on.
00:34:10 And so I think I would be very hesitant
00:34:14 to bet against how far the universal Turing machine
00:34:19 and classical computation paradigm can go.
00:34:23 And my betting would be that all of,
00:34:26 certainly what’s going on in our brain,
00:34:29 can probably be mimicked or approximated
00:34:32 on a classical machine,
00:34:34 not requiring something metaphysical or quantum.
00:34:38 And we’ll get there with some of the work with AlphaFold,
00:34:41 which I think begins the journey of modeling
00:34:45 this beautiful and complex world of biology.
00:34:48 So you think all the magic of the human mind
00:34:50 comes from this, just a few pounds of mush,
00:34:54 of biological computational mush,
00:34:57 that’s akin to some of the neural networks,
00:35:01 not directly, but in spirit
00:35:03 that DeepMind has been working with.
00:35:06 Well, look, you say it’s a few, you know,
00:35:08 and of course it is, but this is,
00:35:09 I think, the biggest miracle of the universe
00:35:11 is that it is just a few pounds of mush in our skulls.
00:35:15 And yet it’s also our brains are the most complex objects
00:35:18 that we know of in the universe.
00:35:20 So there’s something profoundly beautiful
00:35:22 and amazing about our brains.
00:35:23 And I think that it’s an incredibly,
00:35:28 incredibly efficient machine.
00:35:30 And it’s, you know, a phenomenon basically.
00:35:35 And I think that building AI,
00:35:37 one of the reasons I wanna build AI,
00:35:38 and I’ve always wanted to is,
00:35:40 I think by building an intelligent artifact like AI,
00:35:43 and then comparing it to the human mind,
00:35:46 that will help us unlock the uniqueness
00:35:49 and the true secrets of the mind
00:35:50 that we’ve always wondered about since the dawn of history,
00:35:53 like consciousness, dreaming, creativity, emotions,
00:35:59 what are all these things, right?
00:36:00 We’ve wondered about them since the dawn of humanity.
00:36:04 And I think one of the reasons,
00:36:05 and, you know, I love philosophy and philosophy of mind is,
00:36:08 we found it difficult is there haven’t been the tools
00:36:11 for us to really, other than introspection,
00:36:13 from very clever people in history,
00:36:15 very clever philosophers,
00:36:17 to really investigate this scientifically.
00:36:19 But now suddenly we have a plethora of tools.
00:36:21 Firstly, we have all of the neuroscience tools,
00:36:23 fMRI machines, single cell recording, all of this stuff,
00:36:25 but we also have the ability, computers and AI,
00:36:29 to build intelligent systems.
00:36:31 So I think that, you know,
00:36:34 I think it is amazing what the human mind does.
00:36:37 And I’m kind of in awe of it really.
00:36:41 And I think it’s amazing that with our human minds,
00:36:44 we’re able to build things like computers
00:36:46 and actually even, you know,
00:36:48 think and investigate about these questions.
00:36:49 I think that’s also a testament to the human mind.
00:36:52 Yeah.
00:36:53 The universe built the human mind
00:36:56 that now is building computers that help us understand
00:36:59 both the universe and our own human mind.
00:37:01 That’s right.
00:37:02 This is actually it.
00:37:03 I mean, I think that’s one, you know,
00:37:03 one could say we are,
00:37:05 maybe we’re the mechanism by which the universe
00:37:08 is going to try and understand itself.
00:37:09 Yeah.
00:37:10 It’s beautiful.
00:37:13 So let’s go to the basic building blocks of biology
00:37:16 that I think is another angle at which you can start
00:37:20 to understand the human mind, the human body,
00:37:22 which is quite fascinating,
00:37:23 which is from the basic building blocks,
00:37:26 start to simulate, start to model
00:37:28 how from those building blocks,
00:37:30 you can construct bigger and bigger, more complex systems,
00:37:33 maybe one day the entirety of the human biology.
00:37:35 So here’s another problem that was thought
00:37:39 to be impossible to solve, which is protein folding.
00:37:42 And AlphaFold, or specifically AlphaFold 2, did just that.
00:37:48 It solved protein folding.
00:37:50 I think it’s one of the biggest breakthroughs,
00:37:53 certainly in the history of structural biology,
00:37:55 but in general in science,
00:38:00 maybe from a high level, what is it and how does it work?
00:38:04 And then we can ask some fascinating questions after.
00:38:08 Sure.
00:38:09 So maybe to explain it to people not familiar
00:38:12 with protein folding, you know,
00:38:14 first of all, explain proteins, which is, you know,
00:38:16 proteins are essential to all life.
00:38:18 Every function in your body depends on proteins.
00:38:21 Sometimes they’re called the workhorses of biology.
00:38:23 And if you look into them and I’ve, you know,
00:38:25 obviously as part of AlphaFold,
00:38:26 I’ve been researching proteins and structural biology
00:38:30 for the last few years, you know,
00:38:31 they’re amazing little bio-nano machines, proteins.
00:38:34 They’re incredible if you actually watch little videos
00:38:36 of how they work, animations of how they work.
00:38:39 And proteins are specified by their genetic sequence
00:38:42 called the amino acid sequence.
00:38:44 So you can think of it as their genetic makeup.
00:38:47 And then in the body in nature,
00:38:50 they fold up into a 3D structure.
00:38:53 So you can think of it as a string of beads
00:38:55 and then they fold up into a ball.
00:38:57 Now, the key thing is you want to know
00:38:59 what that 3D structure is because the structure,
00:39:02 the 3D structure of a protein is what helps to determine
00:39:06 what does it do, the function it does in your body.
00:39:08 And also if you’re interested in drugs or disease,
00:39:12 you need to understand that 3D structure
00:39:13 because if you want to target something
00:39:15 with a drug compound, to block something
00:39:18 the protein’s doing, you need to understand
00:39:21 where it’s gonna bind on the surface of the protein.
00:39:23 So obviously in order to do that,
00:39:24 you need to understand the 3D structure.
00:39:26 So the structure is mapped to the function.
00:39:28 The structure is mapped to the function
00:39:29 and the structure is obviously somehow specified
00:39:32 by the amino acid sequence.
00:39:34 And that’s, in essence, the protein folding problem:
00:39:37 can you just from the amino acid sequence,
00:39:39 the one dimensional string of letters,
00:39:42 can you immediately computationally predict
00:39:45 the 3D structure?
00:39:47 And this has been a grand challenge in biology
00:39:50 for over 50 years.
00:39:51 So I think it was first articulated by Christian Anfinsen,
00:39:54 a Nobel prize winner in 1972,
00:39:57 as part of his Nobel prize winning lecture.
00:39:59 And he just speculated this should be possible
00:40:01 to go from the amino acid sequence to the 3D structure,
00:40:04 but he didn’t say how.
00:40:06 So it’s been described to me as equivalent
00:40:09 to Fermat’s last theorem, but for biology.
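Stated as a type signature, the problem is simply this mapping; the sketch below only shows the shape of the problem, not how AlphaFold solves it, and the example sequence is arbitrary.

```python
from typing import List, Tuple

Coord = Tuple[float, float, float]
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acid letters

def predict_structure(sequence: str) -> List[Coord]:
    """Map an amino acid sequence to one (x, y, z) position per residue,
    the 3D structure Anfinsen conjectured the sequence fully determines."""
    raise NotImplementedError("this mapping was the 50-year grand challenge")

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # an illustrative sequence
assert all(letter in AMINO_ACIDS for letter in seq)
```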
00:40:12 You should, as somebody that very well might win
00:40:15 the Nobel prize in the future.
00:40:16 But outside of that, you should do more
00:40:19 of that kind of thing.
00:40:20 In the margin, just put random things
00:40:22 that will take like 200 years to solve.
00:40:24 Set people off for 200 years.
00:40:26 It should be possible.
00:40:27 And just don’t give any details.
00:40:29 Exactly.
00:40:29 I think everyone should, exactly.
00:40:31 I’ll have to remember that for the future.
00:40:33 So yeah, so he set off, you know,
00:40:34 with this one throwaway remark, just like Fermat,
00:40:37 you know, he set off this whole 50 year field really
00:40:42 of computational biology.
00:40:44 And they had, you know, they got stuck.
00:40:46 They hadn’t really got very far with doing this.
00:40:48 And until now, until AlphaFold came along,
00:40:52 this is done experimentally, right?
00:40:54 Very painstakingly.
00:40:55 So you have to, like,
00:40:57 crystallize the protein, which is really difficult.
00:40:59 Some proteins can’t be crystallized like membrane proteins.
00:41:03 And then you have to use very expensive electron microscopes
00:41:05 or X-ray crystallography machines.
00:41:08 Really painstaking work to get the 3D structure
00:41:10 and visualize the 3D structure.
00:41:12 So the rule of thumb in experimental biology
00:41:14 is that it takes one PhD student,
00:41:16 their entire PhD to do one protein.
00:41:20 And with AlphaFold 2, we were able to predict
00:41:23 the 3D structure in a matter of seconds.
00:41:26 And so, you know, over Christmas,
00:41:28 we did the whole human proteome,
00:41:30 or every protein in the human body, all 20,000 proteins.
00:41:33 So the human proteome is like the equivalent
00:41:34 of the human genome, but in protein space.
00:41:37 And it sort of revolutionized really
00:41:40 what a structural biologist can do.
00:41:43 Because now they don’t have to worry
00:41:45 about these painstaking experimental,
00:41:47 should they put all of that effort in or not?
00:41:49 They can almost just look up the structure
00:41:51 of their proteins like a Google search.
00:41:53 And so there’s a data set on which it’s trained
00:41:56 and how to map this amino acid sequence.
00:41:58 First of all, it’s incredible that a protein,
00:42:00 this little chemical computer is able to do
00:42:02 that computation itself in some kind of distributed way
00:42:05 and do it very quickly.
00:42:07 That’s a weird thing.
00:42:08 And they evolve that way because, you know,
00:42:10 in the beginning, I mean, that’s a great invention,
00:42:13 just the protein itself.
00:42:14 And then there’s, I think, probably a history
00:42:18 of like they evolved to have many of these proteins
00:42:22 and those proteins figure out how to be computers themselves
00:42:26 in such a way that you can create structures
00:42:28 that can interact in complexes with each other
00:42:30 in order to form high level functions.
00:42:32 I mean, it’s a weird system that they figured it out.
00:42:35 Well, for sure.
00:42:36 I mean, you know, maybe we should talk
00:42:37 about the origins of life too,
00:42:39 but proteins themselves, I think are magical
00:42:41 and incredible, as I said, little bio nano machines.
00:42:45 And actually Levinthal, who was another scientist,
00:42:50 a contemporary of Anfinsen, coined
00:42:55 what became known as Levinthal’s paradox,
00:42:56 which is exactly what you’re saying.
00:42:58 He calculated that roughly an average protein,
00:43:01 which is maybe 2,000 amino acids long,
00:43:05 can fold in maybe 10 to the power 300
00:43:09 different conformations.
00:43:11 So there’s 10 to the power 300 different ways
00:43:13 that protein could fold up.
00:43:14 And yet somehow in nature, physics solves this,
00:43:18 solves this in a matter of milliseconds.
00:43:20 So proteins fold up in your body in, you know,
00:43:23 sometimes in fractions of a second.
00:43:25 So physics is somehow solving that search problem.
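Levinthal’s argument can be redone as back-of-envelope arithmetic; the numbers below (k local shapes per residue, n residues, one conformation tried per femtosecond) are illustrative assumptions, not his exact figures.

```python
k, n = 3, 150                        # assumed: 3 shapes per residue, 150 residues
conformations = k ** n               # ~10**71 even for this modest chain
seconds = conformations * 1e-15      # trying one conformation per femtosecond
age_of_universe_s = 4.3e17           # ~13.8 billion years, in seconds
print(f"{conformations:.2e} conformations; exhaustive search would take "
      f"{seconds / age_of_universe_s:.1e} times the age of the universe")
```

So folding cannot be a blind search over conformations, which is exactly the paradox.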
00:43:29 And just to be clear, in many of these cases,
00:43:31 maybe you can correct me if I’m wrong,
00:43:33 there’s often a unique way for that sequence to form itself.
00:43:37 So among that huge number of possibilities,
00:43:41 it figures out a way how to stably,
00:43:45 in some cases there might be a malfunction, and so on,
00:43:47 which leads to a lot of the disorders and stuff like that.
00:43:50 But most of the time it’s a unique mapping
00:43:52 and that unique mapping is not obvious.
00:43:54 No, exactly.
00:43:55 Which is what the problem is.
00:43:57 Exactly, so there’s a unique mapping usually in a healthy,
00:44:00 if it’s healthy, and as you say in disease,
00:44:04 so for example, Alzheimer’s,
00:44:05 one conjecture is that it’s because of misfolded protein,
00:44:09 a protein that folds in the wrong way, amyloid beta protein.
00:44:12 So, and then because it folds in the wrong way,
00:44:14 it gets tangled up, right, in your neurons.
00:44:17 So it’s super important to understand
00:44:20 both healthy functioning and also disease
00:44:23 is to understand, you know, what these things are doing
00:44:26 and how they’re structuring.
00:44:27 Of course, the next step is sometimes proteins change shape
00:44:30 when they interact with something.
00:44:32 So they’re not just static necessarily in biology.
00:44:37 Maybe you can give some interesting,
00:44:39 sort of beautiful-to-you things about these early days
00:44:43 of AlphaFold, of solving this problem,
00:44:46 because unlike games, this is real physical systems
00:44:51 that are less amenable to self play type of mechanisms.
00:44:55 Sure.
00:44:56 The size of the data set is smaller
00:44:58 than you might otherwise like,
00:44:59 so you have to be very clever about certain things.
00:45:01 Is there something you could speak to
00:45:04 what was very hard to solve
00:45:06 and what are some beautiful aspects about the solution?
00:45:09 Yeah, I would say AlphaFold is the most complex
00:45:12 and also probably most meaningful system
00:45:14 we’ve built so far.
00:45:15 So it’s been an amazing time actually in the last,
00:45:18 you know, two, three years to see that come through
00:45:20 because as we talked about earlier, you know,
00:45:23 games is what we started on
00:45:25 building things like AlphaGo and AlphaZero,
00:45:27 but really the ultimate goal was
00:45:30 not just to crack games,
00:45:31 it was to
00:45:33 use them to bootstrap general learning systems
00:45:35 we could then apply to real world challenges.
00:45:37 Specifically, my passion is scientific challenges
00:45:40 like protein folding.
00:45:41 And then AlphaFold of course
00:45:43 is our first big proof point of that.
00:45:45 And so, you know, in terms of the data
00:45:49 and the amount of innovations that had to go into it,
00:45:50 you know, it was like
00:45:52 more than 30 different component algorithms
00:45:54 that needed to be put together to crack protein folding.
00:45:57 I think some of the big innovations were
00:46:00 kind of building in some hard-coded constraints
00:46:04 around physics and evolutionary biology
00:46:07 to constrain sort of things like the bond angles
00:46:11 in the protein and things like that,
00:46:15 but not to impact the learning system.
00:46:18 So still allowing the system to be able to learn
00:46:21 the physics itself from the examples that we had.
00:46:25 And the examples, as you say,
00:46:26 there are only about 150,000 proteins,
00:46:28 even after 40 years of experimental biology,
00:46:31 only around 150,000 proteins have had
00:46:33 their structures found out.
00:46:35 So that was our training set,
00:46:37 which is much less than normally we would like to use,
00:46:41 but using various tricks, things like self-distillation.
00:46:43 So actually using AlphaFold predictions,
00:46:48 some of the best predictions
00:46:49 that it was highly confident in,
00:46:51 we put them back into the training set, right?
00:46:53 To make the training set bigger,
00:46:55 that was critical to AlphaFold working.
00:46:58 So there was actually a huge number
00:47:00 of different innovations like that,
00:47:02 that were required to ultimately crack the problem.
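A sketch of the self-distillation loop just described, assuming nothing about AlphaFold’s internals: `train` and `predict_with_confidence` are stand-in callables supplied by the caller, and the 0.9 threshold is an arbitrary choice.

```python
def self_distill(train, predict_with_confidence, labeled, unlabeled,
                 rounds=3, threshold=0.9):
    model = train(labeled)                      # e.g. the ~150k known structures
    for _ in range(rounds):
        pseudo_labeled = []
        for sequence in unlabeled:
            structure, confidence = predict_with_confidence(model, sequence)
            if confidence >= threshold:         # keep only confident predictions
                pseudo_labeled.append((sequence, structure))
        # Retrain on the enlarged set: real labels plus confident pseudo-labels.
        model = train(labeled + pseudo_labeled)
    return model
```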
00:47:06 AlphaFold 1, what it produced was a distogram.
00:47:09 So a kind of a matrix of the pairwise distances
00:47:13 between all of the molecules in the protein.
00:47:17 And then there had to be a separate optimization process
00:47:20 to create the 3D structure.
00:47:23 And what we did for AlphaFold 2
00:47:25 is make it truly end to end.
00:47:26 So we went straight from the amino acid sequence
00:47:31 to the 3D structure directly
00:47:33 without going through this intermediate step.
00:47:36 And in machine learning, what we’ve always found is
00:47:38 that the more end to end you can make it,
00:47:40 the better the system.
00:47:42 And it’s probably because in the end,
00:47:46 the system’s better at learning what the constraints are
00:47:48 than we are as the human designers of specifying it.
00:47:51 So anytime you can let it flow end to end
00:47:54 and actually just generate what it is
00:47:55 you’re really looking for, in this case, the 3D structure,
00:47:58 you’re better off than having this intermediate step,
00:48:00 which you then have to handcraft the next step for.
00:48:03 So it’s better to let the gradients and the learning
00:48:06 flow all the way through the system from the end point,
00:48:09 the end output you want to the inputs.
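A toy illustration of that point in PyTorch (hypothetical dimensions and a stand-in model, nothing like the real AlphaFold architecture): because one differentiable model maps sequence features straight to coordinates, the loss on the final structure trains every parameter, whereas a two-stage design would stop gradients at the intermediate distogram.

```python
import torch
import torch.nn as nn

n = 16                                  # residues in a toy protein
seq = torch.randn(n, 8)                 # stand-in encoded amino acid sequence
target_xyz = torch.randn(n, 3)          # stand-in known 3D structure

# One differentiable map from sequence features to 3D coordinates.
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

pred_xyz = model(seq)                           # (n, 3) predicted coordinates
loss = ((pred_xyz - target_xyz) ** 2).mean()    # loss on the end output
loss.backward()                                 # gradients flow all the way back
optimizer.step()
```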
00:48:10 So that’s a good way to start on a new problem.
00:48:13 Handcraft a bunch of stuff,
00:48:14 add a bunch of manual constraints
00:48:16 with a small learning piece,
00:48:18 and grow that learning piece
00:48:21 until it consumes the whole thing.
00:48:22 That’s right.
00:48:23 And so you can also see,
00:48:25 this is a bit of a method we’ve developed
00:48:26 over doing many of these successful,
00:48:29 we call them AlphaX projects, right?
00:48:32 And the easiest way to see that is the evolution
00:48:34 of AlphaGo to AlphaZero.
00:48:36 So AlphaGo was a learning system,
00:48:39 but it was specifically trained to only play Go, right?
00:48:42 And what we wanted to do with the first version of AlphaGo
00:48:45 is just get to world champion performance
00:48:47 no matter how we did it, right?
00:48:49 And then of course, with AlphaGo Zero,
00:48:51 we removed the need to use human games as a starting point,
00:48:55 right?
00:48:56 So it could just play against itself
00:48:57 from a random starting point from the beginning.
00:49:00 So that removed the need for human knowledge about Go.
00:49:03 And then finally AlphaZero generalized it
00:49:05 so that anything Go specific we had in the system,
00:49:08 including things like the symmetry of the Go board, was removed.
00:49:12 So AlphaZero could play from scratch
00:49:14 any two player game, and then MuZero,
00:49:16 which is our latest version
00:49:18 of that set of things, extended it
00:49:20 so that you didn't even have to give it
00:49:22 the rules of the game.
00:49:23 It would learn those for itself.
00:49:24 So it could also deal with computer games
00:49:26 as well as board games.
00:49:27 So that line of AlphaGo, AlphaGo Zero, AlphaZero,
00:49:30 MuZero, that's the full trajectory
00:49:33 of what you can take from imitation learning
00:49:37 to full self supervised learning.
00:49:40 Yeah, exactly.
00:49:41 And learning the entire structure
00:49:44 of the environment you’re put in from scratch, right?
00:49:47 And bootstrapping it through self play yourself.
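A minimal self-play skeleton in the spirit of that trajectory (hypothetical game and agent interfaces; a sketch, not any of the actual systems): no human games, no handcrafted knowledge, just the agent learning from the outcomes of its own play.

```python
def self_play_training(agent, game, num_games=1000):
    # Repeatedly play full games against yourself and learn from the result.
    for _ in range(num_games):
        state = game.initial_state()           # random start, no human games
        trajectory = []
        while not game.is_terminal(state):
            move = agent.select_move(state)    # e.g. search guided by a network
            trajectory.append((state, move))
            state = game.apply(state, move)
        agent.update(trajectory, game.winner(state))  # learn from the outcome
    return agent
```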
00:49:51 But the thing is it would have been impossible, I think,
00:49:53 or very hard for us to build alpha zero
00:49:55 or mu zero first out of the box.
00:49:58 Even psychologically, because you have to believe
00:50:01 in yourself for a very long time.
00:50:03 You’re constantly dealing with doubt
00:50:04 because a lot of people say that it’s impossible.
00:50:06 Exactly, so it's hard enough just to do Go.
00:50:08 As you were saying, everyone thought that was impossible
00:50:10 or at least a decade away from when we did it
00:50:14 back in 2015, 2016.
00:50:17 And so yes, it would have been psychologically
00:50:20 probably very difficult as well as the fact
00:50:22 that of course we learned a lot by building AlphaGo first.
00:50:26 Right, so I think this is why I call AI
00:50:28 an engineering science.
00:50:29 It’s one of the most fascinating science disciplines,
00:50:32 but it’s also an engineering science in the sense
00:50:34 that unlike natural sciences, the phenomenon you’re studying
00:50:38 doesn’t exist out in nature.
00:50:39 You have to build it first.
00:50:40 So you have to build the artifact first,
00:50:42 and then you can study it, pull it apart, and see how it works.
00:50:46 It's tough to ask you this question
00:50:50 because you probably will say it’s everything,
00:50:51 but let’s try to think through this
00:50:54 because you’re in a very interesting position
00:50:56 where DeepMind is a place of some of the most brilliant
00:50:59 ideas in the history of AI,
00:51:01 but it’s also a place of brilliant engineering.
00:51:05 So how much of solving intelligence,
00:51:08 this big goal for DeepMind,
00:51:09 how much of it is science?
00:51:12 How much is engineering?
00:51:13 So how much is the algorithms?
00:51:14 How much is the data?
00:51:16 How much is the hardware compute infrastructure?
00:51:19 How much is it the software compute infrastructure?
00:51:23 What else is there?
00:51:24 How much is the human infrastructure?
00:51:27 And like just the humans interacting in certain kinds of ways
00:51:30 in the space of all those ideas.
00:51:31 And how much is maybe like philosophy?
00:51:33 How much, what’s the key?
00:51:35 If you were to sort of look back,
00:51:40 like if we go forward 200 years and look back,
00:51:43 what was the key thing that solved intelligence?
00:51:46 Is it the ideas or the engineering?
00:51:47 I think it’s a combination.
00:51:49 First of all, of course,
00:51:49 it’s a combination of all those things,
00:51:51 but the ratios of them changed over time.
00:51:54 So even in the last 12 years,
00:51:57 so we started DeepMind in 2010,
00:51:59 which is hard to imagine now because 2010,
00:52:01 it’s only 12 short years ago,
00:52:03 but nobody was talking about AI.
00:52:05 I don’t know if you remember back to your MIT days,
00:52:07 no one was talking about it.
00:52:08 I did a postdoc at MIT back around then.
00:52:11 And it was sort of thought of as a,
00:52:12 well, look, we know AI doesn’t work.
00:52:14 We tried this hard in the 90s at places like MIT,
00:52:17 mostly using logic systems and old fashioned,
00:52:19 sort of good old fashioned AI, we would call it now.
00:52:22 People like Minsky and Patrick Winston,
00:52:25 you know, all these characters, right?
00:52:26 And I used to debate a few of them.
00:52:28 And they used to think I was mad for thinking
00:52:30 some new advance could be made with learning systems.
00:52:32 And I was actually pleased to hear that
00:52:34 because at least you know you’re on a unique track
00:52:36 at that point, right?
00:52:37 Even if all of your professors are telling you you’re mad.
00:52:41 And of course in industry,
00:52:43 we couldn’t get, it was difficult to get two cents together,
00:52:47 which is hard to imagine now as well,
00:52:48 given that it's the biggest sort of buzzword in VC
00:52:51 and fundraising is easy and all these kinds of things today.
00:52:54 So back in 2010, it was very difficult.
00:52:57 And the reason we started then was,
00:52:59 Shane and I used to discuss
00:53:02 what the sort of founding tenets of DeepMind were.
00:53:04 And it was various things.
00:53:06 One was algorithmic advances.
00:53:08 So deep learning, you know,
00:53:09 Geoff Hinton and co had just sort of invented that
00:53:12 in academia, but no one in industry knew about it.
00:53:15 We love reinforcement learning.
00:53:16 We thought that could be scaled up.
00:53:18 But also understanding about the human brain
00:53:20 had advanced quite a lot in the decade prior
00:53:23 with fMRI machines and other things.
00:53:25 So we could get some good hints about architectures
00:53:28 and algorithms and sort of representations maybe
00:53:32 that the brain uses.
00:53:33 So at a systems level, not at an implementation level.
00:53:37 And then the other big things were compute and GPUs, right?
00:53:41 So we could see compute was going to be really useful
00:53:44 and had got to a place where it had become commoditized,
00:53:46 mostly through the games industry
00:53:48 and that could be taken advantage of.
00:53:50 And then the final thing was also mathematical
00:53:52 and theoretical definitions of intelligence.
00:53:54 So things like AIXI,
00:53:57 which Shane worked on with his supervisor, Marcus Hutter,
00:54:00 which is this sort of theoretical proof really
00:54:03 of universal intelligence,
00:54:05 which is actually a reinforcement learning system
00:54:08 in the limit.
00:54:08 I mean, it assumes infinite compute and infinite memory,
00:54:10 in the way, you know, a Turing machine proof does.
00:54:12 But I was also waiting to see something like that too.
00:54:15 You know, the Turing machines and computation theory
00:54:19 that people like Turing and Shannon came up with
00:54:21 underpin modern computer science,
00:54:24 and I was waiting for a theory like that
00:54:26 to sort of underpin AGI research.
00:54:28 So when I, you know, met Shane
00:54:30 and saw he was working on something like that,
00:54:32 you know, that to me was a sort of final piece
00:54:33 of the jigsaw.
00:54:34 So in the early days, I would say that ideas
00:54:38 were the most important.
00:54:40 You know, for us, it was deep reinforcement learning,
00:54:42 scaling up deep learning.
00:54:44 Of course, we’ve seen transformers.
00:54:46 So huge leaps, I would say, three or four of them
00:54:48 if you think from 2010 till now,
00:54:51 huge evolutions, things like AlphaGo.
00:54:53 And maybe there’s a few more still needed.
00:54:57 But as we get closer to AGI,
00:55:02 I think engineering becomes more and more important
00:55:04 and data, because of scale. Of course, the recent
00:55:07 results of GPT-3 and all the big language models
00:55:10 and large models, including our own,
00:55:12 have shown that scale and large models
00:55:16 are clearly gonna be a necessary,
00:55:18 but perhaps not sufficient, part of an AGI solution.
00:55:21 And throughout that, like you said,
00:55:24 and I'd like to give you a big thank you,
00:55:26 you're one of the pioneers in this, sticking by ideas
00:55:30 like reinforcement learning, that this can actually work,
00:55:34 given limited success in the past.
00:55:38 And also, which we still don't know,
00:55:41 but proudly having the best researchers in the world
00:55:46 and talking about solving intelligence.
00:55:49 So talking about whatever you call it,
00:55:50 AGI or something like this, speaking of MIT,
00:55:54 that’s just something you wouldn’t bring up.
00:55:57 Not maybe you did in like 40, 50 years ago,
00:56:03 but that was, AI was a place where you do tinkering,
00:56:09 very small scale, not very ambitious projects.
00:56:12 And maybe the biggest ambitious projects
00:56:16 were in the space of robotics
00:56:17 and doing like the DARPA challenge.
00:56:19 But the task of solving intelligence and believing you can,
00:56:23 that’s really, really powerful.
00:56:24 So in order for engineering to do its work,
00:56:27 to have great engineers, build great systems,
00:56:30 you have to have that belief,
00:56:32 that threads throughout the whole thing
00:56:33 that you can actually solve
00:56:35 some of these impossible challenges.
00:56:36 Yeah, that’s right.
00:56:37 And back in 2010, our mission statement, and it still is today,
00:56:42 was: step one, solve intelligence,
00:56:45 step two, use it to solve everything else.
00:56:47 So if you can imagine pitching that to a VC in 2010,
00:56:51 the kind of looks we got,
00:56:52 we managed to find a few kooky people to back us,
00:56:55 but it was tricky.
00:56:57 And it got to the point where we wouldn’t mention it
00:57:00 to any of our professors because they would just eye roll
00:57:03 and think we committed career suicide.
00:57:05 And so there were a lot of things that we had to do,
00:57:10 but we always believed it.
00:57:11 And one reason, by the way,
00:57:13 one reason I’ve always believed in reinforcement learning
00:57:16 is that if you look at neuroscience,
00:57:19 that is the way that the primate brain learns.
00:57:22 One of the main mechanisms is the dopamine system
00:57:24 implements some form of TD learning.
00:57:26 It was a very famous result in the late 90s
00:57:29 where they saw this in monkeys
00:57:31 as a propagating prediction error.
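For reference, the temporal-difference prediction error he's alluding to is simple to write down; a toy TD(0) value update (illustrative numbers only) looks like this:

```python
# TD(0): nudge the value estimate of a state toward the reward received
# plus the discounted value of the next state. delta is the prediction
# error, the quantity the dopamine recordings are compared to.
def td_update(V, state, reward, next_state, alpha=0.1, gamma=0.99):
    delta = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * delta
    return delta

V = {"s0": 0.0, "s1": 0.0}
error = td_update(V, "s0", reward=1.0, next_state="s1")
print(V["s0"], error)   # 0.1 1.0 — a positive surprise raises the estimate
```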
00:57:34 So again, in the limit,
00:57:36 this is what I think you can use neuroscience for,
00:57:39 and mathematics: when you're doing something as ambitious
00:57:43 as trying to solve intelligence
00:57:44 and it’s blue sky research, no one knows how to do it,
00:57:47 you need to use any evidence
00:57:50 or any source of information you can
00:57:52 to help guide you in the right direction
00:57:54 or give you confidence you’re going in the right direction.
00:57:56 So that was one reason we pushed so hard on that.
00:57:59 And just going back to your earlier question
00:58:01 about organization, the other big thing
00:58:04 that I think we innovated with at DeepMind
00:58:06 to encourage invention and innovation
00:58:10 was the multidisciplinary organization we built
00:58:12 and we still have today.
00:58:14 So DeepMind originally was a confluence
00:58:16 of the most cutting edge knowledge in neuroscience
00:58:19 with machine learning, engineering and mathematics, right?
00:58:22 And gaming.
00:58:24 And then since then we’ve built that out even further.
00:58:26 So we have philosophers here and ethicists,
00:58:30 but also other types of scientists, physicists and so on.
00:58:33 And that’s what brings together,
00:58:35 I tried to build a sort of new type of Bell Labs,
00:58:38 but in its golden era, right?
00:58:41 And a new expression of that to try and foster
00:58:45 this incredible sort of innovation machine.
00:58:48 So talking about the humans in the machine,
00:58:50 DeepMind itself is a learning machine
00:58:53 with lots of amazing human minds in it
00:58:55 coming together to try and build these learning systems.
00:59:00 If we return to the big ambitious dream of AlphaFold,
00:59:04 that may be the early steps on a very long journey
00:59:08 in biology, do you think the same kind of approach
00:59:14 can be used to predict the structure and function
00:59:16 of more complex biological systems?
00:59:18 So multi protein interaction,
00:59:21 and then, I mean, you can go out from there,
00:59:24 just simulating bigger and bigger systems
00:59:26 that eventually simulate something like the human brain
00:59:29 or the human body, just the big mush,
00:59:32 the mess of the beautiful, resilient mess of biology.
00:59:36 Do you see that as a long term vision?
00:59:39 I do, and I think, if you think about
00:59:42 what are the top things I wanted to apply AI to
00:59:45 once we had powerful enough systems,
00:59:47 biology and curing diseases and understanding biology
00:59:52 was right up there, top of my list.
00:59:54 That’s one of the reasons I personally pushed that myself
00:59:56 and with AlphaFold, but I think AlphaFold,
01:00:00 amazing as it is, is just the beginning.
01:00:03 And I hope it’s evidence of what could be done
01:00:07 with computational methods.
01:00:08 So AlphaFold solved this huge problem
01:00:12 of the structure of proteins, but biology is dynamic.
01:00:15 So really what I imagine from here,
01:00:16 and we’re working on all these things now,
01:00:18 is protein-protein interaction, protein-ligand binding,
01:00:23 so reacting with molecules,
01:00:25 then you wanna build up to pathways,
01:00:27 and then eventually a virtual cell.
01:00:30 That’s my dream, maybe in the next 10 years.
01:00:32 And I’ve been talking actually
01:00:33 to a lot of biologists, friends of mine,
01:00:35 Paul Nurse, who runs the Crick Institute,
01:00:36 an amazing biologist, a Nobel Prize winning biologist.
01:00:39 We’ve been discussing for 20 years now, virtual cells.
01:00:42 Could you build a virtual simulation of a cell?
01:00:44 And if you could, that would be incredible
01:00:46 for biology and disease discovery,
01:00:48 because you could do loads of experiments
01:00:49 on the virtual cell, and then only at the last stage,
01:00:52 validate it in the wet lab.
01:00:53 So, in terms of the search space
01:00:56 of discovering new drugs, it takes roughly 10 years
01:00:59 to go from identifying a target
01:01:03 to having a drug candidate.
01:01:06 Maybe that could be shortened by an order of magnitude,
01:01:09 if you could do most of that work in silico.
01:01:13 So in order to get to a virtual cell,
01:01:15 we have to build up understanding
01:01:18 of different parts of biology and the interactions.
01:01:20 And so every few years we talk about this,
01:01:24 I talked about this with Paul.
01:01:25 And then finally, last year after AlphaFold,
01:01:27 I said, now’s the time we can finally go for it.
01:01:30 And AlphaFold is the first proof point
01:01:32 that this might be possible.
01:01:33 And he’s very excited, and we have some collaborations
01:01:35 with his lab, they’re just across the road actually
01:01:38 from us, it’s wonderful being here in King’s Cross
01:01:40 with the Crick Institute across the road.
01:01:42 And I think the next steps,
01:01:45 I think there’s gonna be some amazing advances
01:01:48 in biology built on top of things like AlphaFold.
01:01:50 We’re already seeing that with the community doing that
01:01:53 after we’ve open sourced it and released it.
01:01:56 And I often say that if you think of mathematics
01:02:02 as the perfect description language for physics,
01:02:05 I think AI might end up being
01:02:06 the perfect description language for biology
01:02:09 because biology is so messy, it’s so emergent,
01:02:13 so dynamic and complex.
01:02:15 I find it very hard to believe
01:02:16 we'll ever get to something as elegant
01:02:18 as Newton's laws of motion to describe a cell, right?
01:02:21 It’s just too complicated.
01:02:23 So I think AI is the right tool for that.
01:02:26 So you have to start at the basic building blocks
01:02:29 and use AI to run the simulation
01:02:31 for all those building blocks.
01:02:32 So have a very strong way to predict,
01:02:36 given these building blocks,
01:02:37 what kind of biology emerges, and how that
01:02:40 biological system functions and evolves.
01:02:43 It’s almost like a cellular automata,
01:02:45 you have to run it, you can’t analyze it from a high level.
01:02:47 You have to take the basic ingredients,
01:02:49 figure out the rules and let it run.
01:02:51 But in this case, the rules are very difficult
01:02:53 to figure out, you have to learn them.
01:02:56 That’s exactly it.
01:02:57 So the biology is too complicated to figure out the rules.
01:03:00 It’s too emergent, too dynamic,
01:03:03 say compared to a physics system,
01:03:05 like the motion of a planet, right?
01:03:07 And so you have to learn the rules
01:03:09 and that’s exactly the type of systems that we’re building.
01:03:11 So you mentioned you’ve open sourced AlphaFold
01:03:14 and even the data involved.
01:03:16 I'm personally also really happy,
01:03:20 and a big thank you, for open sourcing MuJoCo,
01:03:23 the physics simulation engine that’s often used
01:03:27 for robotics research and so on.
01:03:29 So I think that’s a pretty gangster move.
01:03:31 So what’s the, I mean, very few companies
01:03:37 or people do that kind of thing.
01:03:39 What’s the philosophy behind that?
01:03:41 You know, it’s a case by case basis.
01:03:42 And in both of those cases,
01:03:44 we felt that was the maximum benefit to humanity to do that.
01:03:47 And the scientific community, in one case,
01:03:50 the robotics and physics community with MuJoCo.
01:03:53 We purchased it.
01:03:54 Yes, we purchased it
01:03:55 with the express purpose of open sourcing it.
01:03:58 So, you know, I hope people appreciate that.
01:04:02 It’s great to hear that you do.
01:04:04 And then the second thing was,
01:04:05 and mostly we did it because the person building it
01:04:08 was not able to cope with supporting it anymore
01:04:11 because it got too big for him.
01:04:13 He’s an amazing professor who built it in the first place.
01:04:16 So we helped him out with that.
01:04:18 And then AlphaFold is even bigger, I would say.
01:04:20 And I think in that case,
01:04:21 we decided that there were so many downstream applications
01:04:25 of AlphaFold that we couldn’t possibly even imagine
01:04:29 what they all were.
01:04:30 So the best way to accelerate drug discovery
01:04:34 and also fundamental research would be to give all
01:04:38 that data away and the system itself.
01:04:43 You know, it’s been so gratifying to see
01:04:45 what people have done that within just one year,
01:04:47 which is a short amount of time in science.
01:04:49 And it’s been used by over 500,000 researchers have used it.
01:04:54 We think that’s almost every biologist in the world.
01:04:56 I think there’s roughly 500,000 biologists in the world,
01:04:58 professional biologists,
01:05:00 have used it to look at their proteins of interest.
01:05:04 We’ve seen amazing fundamental research done.
01:05:06 So a couple of weeks ago,
01:05:09 there was a whole special issue of science,
01:05:10 including the front cover,
01:05:12 which had the nuclear pore complex on it,
01:05:14 which is one of the biggest proteins in the body.
01:05:15 The nuclear pore complex is a protein that governs
01:05:18 all the nutrients going in and out of your cell nucleus.
01:05:21 So they’re like little gateways that open and close
01:05:24 to let things go in and out of your cell nucleus.
01:05:27 So they’re really important, but they’re huge
01:05:29 because they’re massive donut ring shaped things.
01:05:31 And they’ve been looking to try and figure out
01:05:33 that structure for decades.
01:05:34 And they have lots of experimental data,
01:05:37 but it’s too low resolution, there’s bits missing.
01:05:39 And they were able to, like a giant Lego jigsaw puzzle,
01:05:43 use AlphaFold predictions plus experimental data
01:05:46 and combine those two independent sources of information.
01:05:49 Actually, four different groups around the world
01:05:51 were able to put it together more or less simultaneously
01:05:54 using AlphaFold predictions.
01:05:56 So that’s been amazing to see.
01:05:57 And pretty much every pharma company,
01:05:59 every drug company executive I’ve spoken to
01:06:01 has said that their teams are using AlphaFold
01:06:03 to accelerate whatever drugs they’re trying to discover.
01:06:08 So I think the knock on effect has been enormous
01:06:11 in terms of the impact that AlphaFold has made.
01:06:15 And it’s probably bringing in, it’s creating biologists,
01:06:17 it’s bringing more people into the field,
01:06:20 both on the excitement and both on the technical skills
01:06:23 involved in, it’s almost like a gateway drug to biology.
01:06:28 Yes, it is.
01:06:29 And to get more computational people involved too, hopefully.
01:06:32 And I think for us, the next stage, as I said,
01:06:35 is that in future we'll have other considerations too.
01:06:37 We're building on top of AlphaFold
01:06:39 and these other ideas I discussed with you
01:06:41 about protein interactions and genomics and other things.
01:06:44 And not everything will be open source.
01:06:46 Some of it we’ll do commercially
01:06:48 because that will be the best way
01:06:49 to actually get the most resources and impact behind it.
01:06:51 In other ways, some other projects
01:06:53 we’ll do nonprofit style.
01:06:55 And also we have to consider for future things as well,
01:06:58 safety and ethics as well.
01:06:59 Like synthetic biology, there is dual use.
01:07:03 And we have to think about that as well.
01:07:05 With AlphaFold, we consulted with 30 different bioethicists
01:07:08 and other people expert in this field
01:07:10 to make sure it was safe before we released it.
01:07:13 So there’ll be other considerations in future.
01:07:15 But for right now, I think AlphaFold
01:07:17 is a kind of gift from us to the scientific community.
01:07:20 So I'm pretty sure that something like AlphaFold
01:07:25 will be part of Nobel prizes in the future.
01:07:29 But us humans, of course,
01:07:30 are horrible with credit assignment.
01:07:32 So we’ll of course give it to the humans.
01:07:35 Do you think there will be a day
01:07:37 when AI system can’t be denied
01:07:42 that it earned that Nobel prize?
01:07:45 Do you think we will see that in the 21st century?
01:07:47 It depends what type of AIs we end up building, right?
01:07:50 Whether they’re goal seeking agents
01:07:53 who specifies the goals, who comes up with the hypotheses,
01:07:57 who determines which problems to tackle, right?
01:08:00 So I think…
01:08:01 And tweets about it, announcement of the results.
01:08:02 Yes, and tweets about results exactly as part of it.
01:08:05 So I think right now, of course,
01:08:07 it’s amazing human ingenuity that’s behind these systems.
01:08:12 And then the system, in my opinion, is just a tool.
01:08:15 It'd be a bit like saying, with Galileo and his telescope,
01:08:18 that the credit should go to the telescope.
01:08:21 I mean, it’s clearly Galileo building the tool
01:08:23 which he then uses.
01:08:25 So I still see that in the same way today,
01:08:27 even though these tools learn for themselves.
01:08:30 There, I think of things like AlphaFold
01:08:32 and the things we’re building as the ultimate tools
01:08:35 for science and for acquiring new knowledge
01:08:38 to help us as scientists acquire new knowledge.
01:08:41 I think one day there will come a point
01:08:43 where an AI system may solve
01:08:46 or come up with something like general relativity
01:08:48 off its own bat, not just by averaging everything
01:08:52 on the internet or averaging everything on PubMed,
01:08:55 although that would be interesting to see
01:08:56 what that would come up with.
01:08:58 So that to me is a bit like our earlier debate
01:09:00 about creativity, you know, inventing Go
01:09:03 rather than just coming up with a good Go move.
01:09:06 And so I think, you know,
01:09:10 if we wanted to give it the credit
01:09:11 of like a Nobel type of thing,
01:09:13 then it would need to invent Go,
01:09:15 sort of invent that new conjecture out of the blue,
01:09:19 rather than it being specified by the human scientists
01:09:22 or the human creators.
01:09:23 So I think right now it’s definitely just a tool.
01:09:26 Although it is interesting how far you get
01:09:27 by averaging everything on the internet, like you said,
01:09:29 because, you know, a lot of people do see science
01:09:33 as you’re always standing on the shoulders of giants.
01:09:35 And the question is how much are you really reaching
01:09:40 up above the shoulders of giants?
01:09:42 Maybe it’s just simulating different kinds
01:09:44 of results of the past with ultimately this new perspective
01:09:49 that gives you this breakthrough idea.
01:09:51 But that idea may not be novel in the way
01:09:54 that it can’t be already discovered on the internet.
01:09:56 Maybe the Nobel prizes of the next 100 years
01:10:00 are already all there on the internet to be discovered.
01:10:03 They could be, they could be.
01:10:04 I mean, I think this is one of the big mysteries.
01:10:08 First of all,
01:10:11 I believe a lot of the big new breakthroughs
01:10:13 that are gonna come in the next few decades
01:10:15 and even in the last decade are gonna come
01:10:17 at the intersection between different subject areas
01:10:20 where there’ll be some new connection that’s found
01:10:23 between what seemingly were disparate areas.
01:10:26 And one can even think of DeepMind, as I said earlier,
01:10:28 as a sort of interdisciplinary between neuroscience ideas
01:10:31 and AI engineering ideas originally.
01:10:35 And so I think there’s that.
01:10:37 And then one of the things we can’t imagine today is,
01:10:40 and one of the reasons I think people,
01:10:41 we were so surprised by how well large models worked
01:10:44 is that actually it’s very hard for our human minds,
01:10:47 our limited human minds to understand
01:10:49 what it would be like to read the whole internet, right?
01:10:52 I think we can do a thought experiment
01:10:53 and I used to do this of like,
01:10:54 well, what if I read the whole of Wikipedia?
01:10:57 What would I know?
01:10:58 And I think our minds can just about comprehend
01:11:00 maybe what that would be like,
01:11:01 but the whole internet is beyond comprehension.
01:11:04 So I think we just don’t understand what it would be like
01:11:07 to be able to hold all of that in mind potentially, right?
01:11:10 And then active at once,
01:11:12 and then maybe what are the connections
01:11:14 that are available there?
01:11:15 So I think no doubt there are huge things
01:11:17 to be discovered just like that.
01:11:19 But I do think there is this other type of creativity
01:11:22 of true spark of new knowledge, new idea,
01:11:25 never thought before about,
01:11:26 can’t be averaged from things that are known,
01:11:29 that really, of course, everything come,
01:11:32 nobody creates in a vacuum,
01:11:33 so there must be clues somewhere,
01:11:35 but just a unique way of putting those things together.
01:11:38 I think some of the greatest scientists in history
01:11:40 have displayed that I would say,
01:11:42 although it’s very hard to know going back to their time,
01:11:45 what was exactly known when they came up with those things.
01:11:48 Although you’re making me really think because just a thought
01:11:52 experiment of deeply knowing a hundred Wikipedia pages.
01:11:57 I don’t think I can,
01:11:59 I’ve been really impressed by Wikipedia for technical topics.
01:12:03 So if you know a hundred pages or a thousand pages,
01:12:07 I don’t think we can truly comprehend
01:12:10 what kind of intelligence that is.
01:12:13 That’s a pretty powerful intelligence.
01:12:14 If you know how to use that
01:12:16 and integrate that information correctly,
01:12:18 I think you can go really far.
01:12:20 You can probably construct thought experiments
01:12:22 based on that, like simulate different ideas.
01:12:25 So if this is true, let me run this thought experiment
01:12:28 that maybe this is true.
01:12:30 It’s not really invention.
01:12:31 It’s like just taking literally the knowledge
01:12:34 and using it to construct the very basic simulation
01:12:37 of the world.
01:12:38 I mean, some argue it’s romantic in part,
01:12:40 but Einstein would do the same kind of things
01:12:42 with a thought experiment.
01:12:43 Yeah, one could imagine doing that systematically
01:12:46 across millions of Wikipedia pages,
01:12:48 plus PubMed, all these things.
01:12:50 I think there are many, many things to be discovered
01:12:53 like that that are hugely useful.
01:12:55 You could imagine,
01:12:56 and I want us to do some of these things in material science
01:12:58 like room temperature superconductors,
01:13:00 which is something on my list for one day.
01:13:01 I’d like to have an AI system to help build
01:13:05 better optimized batteries,
01:13:06 all of these sort of mechanical things.
01:13:09 I think a systematic sort of search,
01:13:11 guided by a model,
01:13:14 could be extremely powerful.
01:13:17 So speaking of which,
01:13:18 you have a paper on nuclear fusion,
01:13:21 Magnetic Control of Tokamak Plasmas
01:13:23 Through Deep Reinforcement Learning.
01:13:24 So you’re seeking to solve nuclear fusion with deep RL.
01:13:29 So it’s doing control of high temperature plasmas.
01:13:31 Can you explain this work
01:13:33 and can AI eventually solve nuclear fusion?
01:13:37 It’s been very fun last year or two and very productive
01:13:40 because we’ve been taking off a lot of my dream projects,
01:13:43 if you like, of things that I’ve collected
01:13:44 over the years of areas of science
01:13:46 that I would like to,
01:13:48 I think could be very transformative if we helped accelerate
01:13:51 and really interesting problems,
01:13:53 scientific challenges in of themselves.
01:13:55 So this is energy.
01:13:57 So energy, yes, exactly.
01:13:58 So energy and climate.
01:13:59 So we talked about disease and biology
01:14:01 as being one of the biggest places I think AI can help with.
01:14:04 I think energy and climate is another one.
01:14:07 So maybe they would be my top two.
01:14:09 And fusion is one area I think AI can help with.
01:14:12 Now, fusion has many challenges,
01:14:15 mostly physics and material science
01:14:17 and engineering challenges as well
01:14:18 to build these massive fusion reactors
01:14:20 and contain the plasma.
01:14:21 And what we try to do
01:14:22 whenever we go into a new field to apply our systems
01:14:26 is talk to domain experts.
01:14:29 We try and find the best people in the world
01:14:30 to collaborate with.
01:14:33 In this case, in fusion,
01:14:34 we collaborated with EPFL in Switzerland,
01:14:36 the Swiss Federal Institute of Technology in Lausanne, who are amazing.
01:14:38 They have a test reactor
01:14:39 that they were willing to let us use,
01:14:41 which, I double checked with the team,
01:14:43 we were gonna use carefully and safely.
01:14:46 I was impressed they managed to persuade them
01:14:47 to let us use it.
01:14:49 And it’s an amazing test reactor they have there.
01:14:53 And they try all sorts of pretty crazy experiments on it.
01:14:57 And what we tend to look at is,
01:14:59 if we go into a new domain like fusion,
01:15:01 what are all the bottleneck problems?
01:15:04 Like thinking from first principles,
01:15:05 what are all the bottleneck problems
01:15:07 that are still stopping fusion working today?
01:15:09 And then we get a fusion expert to tell us,
01:15:12 and we look at those bottlenecks
01:15:13 and ask
01:15:14 which ones are amenable to our AI methods today, right?
01:15:18 And would be interesting from a research perspective,
01:15:22 from our point of view, from an AI point of view,
01:15:24 and that would address one of their bottlenecks.
01:15:26 And in this case, plasma control was perfect.
01:15:29 So, the plasma, it’s a million degrees Celsius,
01:15:32 something like that, it’s hotter than the sun.
01:15:34 And there’s obviously no material that can contain it.
01:15:37 So, it has to be contained with these
01:15:39 very powerful superconducting magnetic fields.
01:15:42 But the problem is plasma,
01:15:43 it’s pretty unstable as you imagine,
01:15:45 you’re kind of holding a mini sun, mini star in a reactor.
01:15:49 So, you kind of want to predict ahead of time,
01:15:52 what the plasma is gonna do.
01:15:54 So, you can move the magnetic field
01:15:56 within a few milliseconds,
01:15:58 to basically contain what it’s gonna do next.
01:16:00 So, it seems like a perfect problem if you think of it
01:16:03 as a reinforcement learning prediction problem.
01:16:06 So, you've got a controller that's gonna move the magnetic field.
01:16:09 And until we came along, they were doing it
01:16:12 with traditional operational research type of controllers,
01:16:16 which are kind of handcrafted.
01:16:18 And the problem is, of course,
01:16:19 they can’t react in the moment
01:16:20 to something the plasma is doing,
01:16:21 they have to be hard coded.
01:16:23 And again, knowing that that’s normally our go to solution
01:16:26 is we would like to learn that instead.
01:16:27 And they also had a simulator of the plasma.
01:16:30 So, there were lots of criteria
01:16:31 that matched what we like to use.
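Schematically, the setup he describes reduces to a standard control loop with a learned policy in place of the handcrafted controller (all interfaces below are hypothetical stand-ins, not the actual experimental setup or DeepMind's code):

```python
# A learned policy reads plasma measurements and sets coil voltages
# every few milliseconds; the reward measures how well the plasma
# matches the target shape. Interfaces are illustrative only.
def control_episode(policy, simulator, target_shape, steps=10_000):
    observation = simulator.reset(target_shape)
    total_reward = 0.0
    for _ in range(steps):
        coil_voltages = policy.act(observation)   # replaces the handcrafted controller
        observation, reward = simulator.step(coil_voltages)
        total_reward += reward
    return total_reward
```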
01:16:34 So, can AI eventually solve nuclear fusion?
01:16:38 Well, so with this problem,
01:16:39 which we published in a Nature paper last year,
01:16:42 we held the plasma in specific shapes.
01:16:46 So, actually, it's almost like carving the plasma
01:16:48 into different shapes and holding it there
01:16:51 for a record amount of time.
01:16:52 So, that’s one of the problems of fusion sort of solved.
01:16:57 So, have a controller that’s able to,
01:16:59 no matter the shape.
01:17:01 Contain it. Contain it.
01:17:02 Yeah, contain it and hold it in structure.
01:17:04 And there’s different shapes that are better
01:17:05 for energy production, called droplets and so on.
01:17:10 So, that was huge.
01:17:11 And now we’re looking,
01:17:12 we’re talking to lots of fusion startups
01:17:14 to see what’s the next problem we can tackle
01:17:17 in the fusion area.
01:17:19 So, another fascinating place in a paper titled,
01:17:23 Pushing the Frontiers of Density Functionals
01:17:25 by Solving the Fractional Electron Problem.
01:17:27 So, you’re taking on modeling and simulating
01:17:30 the quantum mechanical behavior of electrons.
01:17:33 Yes.
01:17:36 Can you explain this work and can AI model
01:17:39 and simulate arbitrary quantum mechanical systems
01:17:41 in the future?
01:17:42 Yeah, so this is another problem I’ve had my eye on
01:17:44 for a decade or more,
01:17:47 which is sort of simulating the properties of electrons.
01:17:51 If you can do that, you can basically describe
01:17:54 how elements and materials and substances work.
01:17:58 So, it’s kind of like fundamental
01:18:00 if you want to advance material science.
01:18:02 And we have Schrödinger's equation,
01:18:05 and then we have approximations
01:18:06 to that, like density functional theory.
01:18:08 These things are famous.
01:18:10 And people try and write approximations
01:18:13 to these functionals and kind of come up
01:18:17 with descriptions of the electron clouds,
01:18:19 where they’re going to go,
01:18:20 how they’re going to interact
01:18:22 when you put two elements together.
01:18:24 And what we try to do is learn a simulation,
01:18:27 learn a functional that will describe more chemistry,
01:18:30 types of chemistry.
01:18:31 So, until now, you can run expensive simulations,
01:18:35 but then you can only simulate very small molecules,
01:18:38 very simple molecules.
01:18:40 We would like to simulate large materials.
01:18:43 And so, today there’s no way of doing that.
01:18:45 And we’re building up towards building functionals
01:18:48 that approximate Schrödinger's equation
01:18:51 and then allow you to describe what the electrons are doing.
01:18:55 And sort of all of material science
01:18:57 and material properties are governed by the electrons
01:18:59 and how they interact.
01:19:01 So, have a good summarization of the simulation
01:19:05 through the functional,
01:19:08 but one that is still close
01:19:11 to what the actual simulation would come out with.
01:19:13 So, how difficult is that task?
01:19:16 What’s involved in that task?
01:19:17 Is it running those complicated simulations
01:19:20 and learning the task of mapping
01:19:23 from the initial conditions
01:19:24 and the parameters of the simulation,
01:19:26 learning what the functional would be?
01:19:27 Yeah.
01:19:28 So, it’s pretty tricky.
01:19:29 And we’ve done it with,
01:19:31 the nice thing is we can run a lot of the simulations,
01:19:35 the molecular dynamic simulations on our compute clusters.
01:19:39 And so, that generates a lot of data.
01:19:40 So, in this case, the data is generated.
01:19:42 So, we like those sort of systems and that’s why we use games.
01:19:45 It’s simulated, generated data.
01:19:48 And we can kind of create as much of it as we want, really.
01:19:51 And just let’s leave some,
01:19:53 if any computers are free in the cloud,
01:19:55 we just run, we run some of these calculations, right?
01:19:57 Compute cluster calculation.
01:19:59 I like how the free compute time
01:20:01 is used up on quantum mechanics.
01:20:02 Yeah, quantum mechanics, exactly.
01:20:03 Simulations and protein simulations and other things.
01:20:06 And so, when you’re not searching on YouTube
01:20:09 for free video, cat videos,
01:20:11 we’re using those computers usefully in quantum chemistry.
01:20:13 It’s the idea.
01:20:14 Finally.
01:20:15 And putting them to good use.
01:20:17 And then, yeah, and then all of that computational data
01:20:19 that’s generated,
01:20:20 we can then try and learn the functionals from that,
01:20:23 which of course are way more efficient
01:20:25 once we learn the functional
01:20:27 than running those simulations would be.
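The pattern here is ordinary supervised learning on simulator output; a minimal sketch of fitting a cheap surrogate to expensive simulation data (toy tensors and shapes, not the actual training setup):

```python
import torch
import torch.nn as nn

# Pretend these came from expensive offline simulations:
sim_inputs = torch.randn(1000, 32)    # stand-in electron-density features
sim_energies = torch.randn(1000, 1)   # stand-in computed energies

surrogate = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for _ in range(100):                  # plain supervised regression
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(sim_inputs), sim_energies)
    loss.backward()
    opt.step()
# Once trained, evaluating surrogate() is far cheaper than rerunning
# the simulations it was fitted to.
```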
01:20:30 Do you think one day AI may allow us
01:20:33 to do something like basically crack open physics?
01:20:36 So, do something like travel faster than the speed of light?
01:20:39 My ultimate aim with AI,
01:20:41 the reason I am personally working on AI
01:20:45 for my whole life, was to build a tool
01:20:48 to help us understand the universe.
01:20:50 And that means physics, really,
01:20:53 and the nature of reality.
01:20:54 So, I don’t think we have systems
01:20:58 that are capable of doing that yet,
01:20:59 but when we get towards AGI,
01:21:01 I think that’s one of the first things
01:21:02 I think we should apply AGI to.
01:21:05 I would like to test the limits of physics
01:21:07 and our knowledge of physics.
01:21:08 There’s so many things we don’t know.
01:21:10 This is one thing I find fascinating about science.
01:21:12 And as a huge proponent of the scientific method
01:21:15 as being one of the greatest ideas humanity has ever had
01:21:17 and allowed us to progress with our knowledge,
01:21:20 but I think as a true scientist,
01:21:22 I think what you find is the more you find out,
01:21:25 the more you realize we don’t know.
01:21:27 And I always think that it's surprising
01:21:29 that more people aren't troubled by this.
01:21:31 Every night I think about all these things
01:21:34 we interact with all the time
01:21:35 that we have no idea how they work.
01:21:36 Time, consciousness, gravity, life,
01:21:41 I mean, these are all the fundamental things of nature,
01:21:43 and we don't really know what they are.
01:21:47 To live life, we pin certain assumptions on them
01:21:51 and kind of treat our assumptions as if they’re a fact.
01:21:55 That allows us to sort of box them off somehow.
01:21:57 Yeah, box them off somehow.
01:21:59 But the reality is when you think of time,
01:22:02 you should remind yourself,
01:22:03 you should take it off the shelf
01:22:06 and realize like, no, we have a bunch of assumptions.
01:22:09 There’s still a lot of, there’s even now a lot of debate.
01:22:11 There’s a lot of uncertainty about exactly what is time.
01:22:15 Is there an arrow of time?
01:22:17 You know, there’s a lot of fundamental questions
01:22:19 that you can’t just make assumptions about.
01:22:21 And maybe AI allows you to not put anything on the shelf.
01:22:27 Yeah.
01:22:28 Not make any hard assumptions
01:22:30 and really open it up and see what's there.
01:22:32 Exactly, I think we should be truly open minded about that.
01:22:34 And exactly that, not be dogmatic to a particular theory.
01:22:39 It’ll also allow us to build better tools,
01:22:41 experimental tools eventually,
01:22:44 that can then test certain theories
01:22:46 that may not be testable today.
01:22:48 Things about like what we spoke about at the beginning
01:22:51 about the computational nature of the universe.
01:22:53 How one might, if that was true,
01:22:55 how one might go about testing that, right?
01:22:57 And, you know, there are people
01:22:59 who've conjectured, people like Scott Aaronson and others,
01:23:02 about, you know, how much information
01:23:04 a specific Planck unit of space and time
01:23:08 can contain, right?
01:23:09 So one might be able to think about testing those ideas
01:23:11 if you had AI helping you build
01:23:15 some new exquisite experimental tools.
01:23:19 This is what I imagine that, you know,
01:23:20 many decades from now we’ll be able to do.
01:23:23 And what kind of questions can be answered
01:23:25 through running a simulation of them?
01:23:28 So there’s a bunch of physics simulations
01:23:30 you can imagine that could be run
01:23:32 in some kind of efficient way,
01:23:35 much like you’re doing in the quantum simulation work.
01:23:40 And perhaps even the origin of life.
01:23:42 So figuring out, going even back
01:23:45 to before where the work of AlphaFold begins,
01:23:47 how this whole thing emerges from a rock.
01:23:52 Yes.
01:23:53 From a static thing.
01:23:54 What do you think AI will allow us to do there?
01:23:57 Is that something you have your eye on,
01:23:58 trying to understand the origin of life?
01:24:01 First of all, yourself, what do you think,
01:24:06 how the heck did life originate on Earth?
01:24:08 Yeah, well, maybe I’ll come to that in a second,
01:24:11 but I think the ultimate use of AI
01:24:13 is to kind of use it to accelerate science to the maximum.
01:24:18 So I think of it a little bit
01:24:21 like the tree of all knowledge.
01:24:22 If you imagine that’s all the knowledge there is
01:24:24 in the universe to attain.
01:24:25 And we sort of barely scratched the surface of that so far.
01:24:29 And even though we’ve done pretty well
01:24:31 since the enlightenment, right, as humanity.
01:24:34 And I think AI will turbocharge all of that,
01:24:36 like we’ve seen with AlphaFold.
01:24:38 And I want to explore as much of that tree of knowledge
01:24:41 as is possible to do.
01:24:42 And I think that involves AI helping us
01:24:46 with understanding or finding patterns,
01:24:49 but also potentially designing and building new tools,
01:24:52 experimental tools.
01:24:53 So I think that’s all,
01:24:56 and also running simulations and learning simulations,
01:24:58 all of that we’re sort of doing at a baby steps level here.
01:25:05 But I can imagine that in the decades to come
01:25:08 as what’s the full flourishing of that line of thinking.
01:25:12 It’s gonna be truly incredible, I would say.
01:25:15 If I visualized this tree of knowledge,
01:25:17 something tells me that that tree of knowledge for humans
01:25:20 is much smaller in the set of all possible trees
01:25:24 of knowledge, it’s actually quite small
01:25:26 given our cognitive limitations,
01:25:31 limited cognitive capabilities,
01:25:33 that even with the tools we build,
01:25:35 we still won’t be able to understand a lot of things.
01:25:38 And that’s perhaps what nonhuman systems
01:25:41 might be able to reach farther, not just as tools,
01:25:44 but in themselves understanding something
01:25:47 that they can bring back.
01:25:48 Yeah, it could well be.
01:25:50 So, I mean, there’s so many things
01:25:51 that are sort of encapsulated in what you just said there.
01:25:55 I think first of all, there’s two different things.
01:25:58 There’s like, what do we understand today?
01:26:00 What could the human mind understand?
01:26:02 And what is the totality of what is there to be understood?
01:26:06 And so there’s three concentric,
01:26:08 you can think of them as three larger and larger trees
01:26:10 or exploring more branches of that tree.
01:26:12 And I think with AI, we’re gonna explore that whole lot.
01:26:15 Now, the question is, if you think about
01:26:19 what is the totality of what could be understood,
01:26:22 there may be some fundamental physics reasons
01:26:24 why certain things can’t be understood,
01:26:26 like what’s outside a simulation or outside the universe.
01:26:29 Maybe it’s not understandable from within the universe.
01:26:32 So there may be some hard constraints like that.
01:26:34 It could be smaller constraints,
01:26:36 like we think of space time as fundamental.
01:26:40 Our human brains are really used to this idea
01:26:42 of a three dimensional world with time, maybe.
01:26:46 But our tools could go beyond that.
01:26:47 They wouldn’t have that limitation necessarily.
01:26:49 They could think in 11 dimensions, 12 dimensions,
01:26:51 whatever is needed.
01:26:52 But we could still maybe understand that
01:26:55 in several different ways.
01:26:56 The example I always give is,
01:26:59 when I played Garry Kasparov at speed chess,
01:27:01 or we've talked about chess and these kinds of things,
01:27:04 you know, if you're reasonably good at chess,
01:27:07 you can't come up with the move Garry comes up with,
01:27:11 but he can explain it to you.
01:27:13 And you can understand.
01:27:14 And you can understand post hoc the reasoning.
01:27:16 So I think there’s an even further level of like,
01:27:19 well, maybe you couldn’t have invented that thing,
01:27:21 but going back to using language again,
01:27:24 perhaps you can understand and appreciate that.
01:27:27 Same way that you can appreciate, you know,
01:27:28 Vivaldi or Mozart or something without,
01:27:31 you can appreciate the beauty of that
01:27:32 without being able to construct it yourself, right?
01:27:35 Invent the music yourself.
01:27:37 So I think we see this in all forms of life.
01:27:39 So it will be that times, you know, a million,
01:27:42 but you can imagine also one sign of intelligence
01:27:45 is the ability to explain things clearly and simply, right?
01:27:49 You know, people like Richard Feynman,
01:27:50 another one of my old time heroes used to say that, right?
01:27:52 If you can’t, you know, if you can explain it
01:27:54 something simply, then that’s the best sign,
01:27:57 a complex topic simply,
01:27:58 then that’s one of the best signs of you understanding it.
01:28:00 Yeah.
01:28:01 I can see myself talking trash to the AI system in that way.
01:28:04 Yes.
01:28:05 It gets frustrated at how dumb I am
01:28:07 when trying to explain something to me.
01:28:09 I was like, well, that means you’re not intelligent
01:28:11 because if you were intelligent,
01:28:12 you’d be able to explain it simply.
01:28:14 Yeah, of course, you know, there’s also the other option.
01:28:16 Of course, we could enhance ourselves and with our devices,
01:28:19 we are already sort of symbiotic with our compute devices,
01:28:23 right, with our phones and other things.
01:28:24 And, you know, there’s stuff like Neuralink and Xceptra
01:28:27 that could advance that further.
01:28:30 So I think there’s lots of really amazing possibilities
01:28:33 that I could foresee from here.
01:28:35 Well, let me ask you some wild questions.
01:28:37 So out there looking for friends,
01:28:39 do you think there’s a lot of alien civilizations out there?
01:28:43 So I guess this also goes back
01:28:44 to your origin of life question too,
01:28:46 because I think that that’s key.
01:28:48 My personal opinion, looking at all this,
01:28:51 and, you know, it’s one of my hobbies, physics, I guess.
01:28:53 So, you know, it’s something I think about a lot
01:28:56 and talk to a lot of experts on and read a lot of books on.
01:29:00 And I think my feeling currently is that we are alone.
01:29:05 I think that’s the most likely scenario
01:29:07 given what evidence we have.
01:29:08 And the reasoning is, I think, that, you know,
01:29:13 we've tried, since things like the SETI program
01:29:16 and I guess since the dawning of the space age,
01:29:19 we've, you know, had telescopes,
01:29:21 open radio telescopes and other things,
01:29:23 to try to detect signals.
01:29:27 Now, if you think about the evolution of humans on Earth,
01:29:30 we could have easily been a million years ahead
01:29:33 of our time now, or a million years behind,
01:29:36 easily, with just some slightly different quirk
01:29:39 happening hundreds of thousands of years ago.
01:29:42 You know, if the meteor had hit the dinosaurs
01:29:43 a million years earlier,
01:29:46 maybe things would have evolved differently
01:29:48 and we'd be a million years ahead of where we are now.
01:29:50 So what that means is if you imagine where humanity will be
01:29:54 in a few hundred years, let alone a million years,
01:29:56 especially if we hopefully, you know,
01:29:59 solve things like climate change and other things,
01:30:02 and we continue to flourish and we build things like AI
01:30:05 and we do space traveling and all of the stuff
01:30:07 that humans have dreamed of forever, right?
01:30:10 And sci-fi has talked about forever.
01:30:14 We will be spreading across the stars, right?
01:30:16 And von Neumann famously calculated, you know,
01:30:19 it would only take about a million years
01:30:20 if you sent out von Neumann probes to the nearest,
01:30:23 you know, the nearest other solar systems.
01:30:26 And then all they did was build two more versions
01:30:29 of themselves and sent those two out
01:30:30 to the next nearest systems.
01:30:32 You know, within a million years,
01:30:33 I think you would have one of these probes
01:30:35 in every system in the galaxy.
01:30:36 So it’s not actually in cosmological time.
01:30:40 That’s actually a very short amount of time.
01:30:42 And you know, people like Dyson have thought
01:30:44 about constructing Dyson spheres around stars
01:30:47 to collect all the energy coming out of the star.
01:30:49 You know, constructions like that
01:30:51 would be visible across space,
01:30:54 probably even across a galaxy.
01:30:56 And then, you know, if you think about
01:30:57 all of our radio and television emissions
01:31:00 that have gone out since the, you know, 30s and 40s,
01:31:05 imagine a million years of that.
01:31:06 And now hundreds of civilizations doing that.
01:31:10 When we opened our ears at the point
01:31:12 we got technologically sophisticated enough
01:31:14 in the space age,
01:31:15 we should have heard a cacophony of voices.
01:31:19 We should have joined that cacophony of voices.
01:31:20 And what we did, we opened our ears and we heard nothing.
01:31:24 And many people who argue that there are aliens
01:31:27 would say, well, we haven’t really done
01:31:28 an exhaustive search yet.
01:31:29 And maybe we’re looking in the wrong bands
01:31:31 and we’ve got the wrong devices
01:31:33 and we wouldn’t notice what an alien form was like
01:31:36 because it’d be so different to what we’re used to.
01:31:38 But, you know, I don’t really buy that,
01:31:40 that it shouldn’t be as difficult as that.
01:31:42 Like, I think we’ve searched enough.
01:31:44 It should be everywhere.
01:31:45 If it was, yeah, it should be everywhere.
01:31:47 We should see Dyson spheres being put up,
01:31:49 sun’s blinking in and out.
01:31:50 You know, there should be a lot of evidence
01:31:52 for those things.
01:31:52 And then there are other people who argue
01:31:54 for the sort of safari view:
01:31:56 we're a primitive species still
01:31:57 because we're not space faring yet,
01:31:59 and, you know, there's some kind of
01:32:01 universal rule not to interfere,
01:32:03 you know, the Star Trek rule.
01:32:04 But like, look, we can’t even coordinate humans
01:32:07 to deal with climate change and we’re one species.
01:32:10 What is the chance that all of these different
01:32:12 alien civilizations
01:32:14 would have the same priorities
01:32:16 and agree across these kinds of matters?
01:32:20 And even if that was true
01:32:21 and we were in some sort of safari for our own good,
01:32:25 to me, that’s not much different
01:32:26 from the simulation hypothesis
01:32:27 because what does it mean, the simulation hypothesis?
01:32:29 I think in its most fundamental level,
01:32:31 it means what we’re seeing is not quite reality, right?
01:32:34 It’s something, there’s something more deeper underlying it,
01:32:37 maybe computational.
01:32:39 Now, if we were in a sort of safari park
01:32:42 and everything we were seeing was a hologram
01:32:44 and it was projected by the aliens or whatever,
01:32:46 that to me is not much different
01:32:47 than thinking we’re inside of another universe
01:32:50 because we still can’t see true reality, right?
01:32:53 I mean, there’s other explanations.
01:32:55 It could be that the way they’re communicating
01:32:58 is just fundamentally different,
01:32:59 that we’re too dumb to understand the much better methods
01:33:02 of communication they have.
01:33:03 It could be, I mean, it’s silly to say,
01:33:06 but our own thoughts could be the methods
01:33:09 by which they’re communicating.
01:33:11 Like the place from which our ideas,
01:33:13 writers talk about this, like the muse.
01:33:15 Yeah.
01:33:17 I mean, it sounds like very kind of wild,
01:33:20 but it could be thoughts.
01:33:22 It could be some interactions with our mind
01:33:24 that we think are originating from us
01:33:27 is actually something that is coming
01:33:31 from other life forms elsewhere.
01:33:33 Consciousness itself might be that.
01:33:34 It could be, but I don’t see any sensible argument
01:33:37 to the why would all of the alien species
01:33:40 behave in this way?
01:33:41 Yeah, some of them will be more primitive.
01:33:43 They will be close to our level.
01:33:44 There should be a whole sort of normal distribution
01:33:47 of these things, right?
01:33:48 Some would be aggressive.
01:33:49 Some would be curious.
01:33:52 Others would be very historical and philosophical
01:33:55 because maybe they’re a million years older than us,
01:33:58 but it shouldn’t be uniform.
01:34:00 I mean, one alien civilization might be like that,
01:34:03 communicating through thoughts,
01:34:04 but I don’t see why the potentially hundreds out there
01:34:07 would all be uniform in this way, right?
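As an aside, the "potentially hundreds" here is the kind of figure the classic Drake equation produces. A minimal sketch in Python, assuming illustrative parameter values throughout (none of the numbers below come from the conversation, and small changes to them swing the result by orders of magnitude):

```python
# A minimal Drake-equation sketch: expected number of detectable
# civilizations in the galaxy, given assumed (illustrative) parameters.

def drake_estimate(
    star_formation_rate=1.5,   # new stars per year in the galaxy (assumed)
    frac_with_planets=0.9,     # fraction of stars with planets (assumed)
    habitable_per_star=0.4,    # habitable planets per planet-bearing star (assumed)
    frac_life=0.1,             # fraction of habitable planets where life arises (assumed)
    frac_intelligent=0.01,     # fraction of those that evolve intelligence (assumed)
    frac_communicating=0.1,    # fraction that become detectable (assumed)
    lifetime_years=1_000_000,  # how long a civilization stays detectable (assumed)
):
    """Return the expected number of currently detectable civilizations."""
    return (star_formation_rate * frac_with_planets * habitable_per_star
            * frac_life * frac_intelligent * frac_communicating * lifetime_years)

print(f"~{drake_estimate():.0f} detectable civilizations")  # ~54 with these inputs
```

With slightly more optimistic inputs the same product lands in the hundreds, which is the order of magnitude the discussion is gesturing at.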
01:34:10 It could be a violent dictatorship, that the
01:34:13 alien civilizations that become successful
01:34:20 gain the ability to be destructive,
01:34:23 orders of magnitude more destructive.
01:34:26 But of course the sad thought is,
01:34:29 well, either humans are very special,
01:34:32 we took a lot of leaps to arrive
01:34:35 at what it means to be human.
01:34:38 There’s a question there: which leap was the hardest,
01:34:41 which was the most special?
01:34:42 Or, if others have reached this level,
01:34:45 and maybe many others have reached this level,
01:34:47 there’s a great filter that prevented them from going farther,
01:34:52 from becoming a multiplanetary species
01:34:54 or reaching out into the stars.
01:34:57 And those are really important questions for us,
01:34:59 whether there’s other alien civilizations out there or not,
01:35:04 this is very useful for us to think about.
01:35:06 If we destroy ourselves, how will we do it?
01:35:10 And how easy is it to do?
01:35:11 Yeah, well, these are big questions
01:35:14 and I’ve thought about these a lot,
01:35:15 but the interesting thing is that if we’re alone,
01:35:19 that’s somewhat comforting from the great filter perspective
01:35:22 because it probably means the great filters are past us.
01:35:25 And I’m pretty sure they are.
01:35:26 So going back to your origin of life question,
01:35:29 there are some incredible things
01:35:30 that no one knows how they happened,
01:35:31 like obviously the first life form arising from the chemical soup,
01:35:35 that seems pretty hard,
01:35:36 but I would guess the multicellular,
01:35:38 I wouldn’t be that surprised if we saw single cell
01:35:42 sort of life forms elsewhere, bacteria type things,
01:35:45 but multicellular life seems incredibly hard,
01:35:48 that step of capturing mitochondria
01:35:50 and then sort of using that as part of yourself,
01:35:53 you know, when you’ve just eaten it.
01:35:53 Would you say that’s the biggest,
01:35:57 like if you had to choose one sort of
01:36:01 Hitchhiker’s Guide to the Galaxy one sentence summary of,
01:36:04 oh, those clever creatures did this,
01:36:07 it would be the multicellular step?
01:36:08 I think that was probably the one that’s the biggest.
01:36:10 I mean, there’s a great book by Nick Lane,
01:36:11 Life Ascending: The Ten Great Inventions of Evolution,
01:36:14 and he speculates on 10 of these, you know,
01:36:17 what could be great filters.
01:36:19 I think that’s one.
01:36:21 I think the advent of intelligence,
01:36:23 conscious intelligence, the kind needed
01:36:26 for us to be able to do science and things like that,
01:36:28 is huge as well.
01:36:29 I mean, it’s only evolved once, as far as we know,
01:36:32 in Earth’s history.
01:36:34 So that would be a later candidate,
01:36:37 but there’s certainly for the early candidates,
01:36:39 I think multicellular life forms is huge.
01:36:41 By the way, it’s interesting to ask you
01:36:43 if you can hypothesize about
01:36:45 what the origin of intelligence is.
01:36:48 Is it that we started cooking meat over fire?
01:36:53 Is it that we somehow figured out
01:36:55 that we could be very powerful when we started collaborating?
01:36:58 So cooperation between our ancestors
01:37:03 so that they could overthrow the alpha male.
01:37:07 What is it?
01:37:07 I talked to Richard Wrangham,
01:37:08 who thinks we’re all just beta males
01:37:10 who figured out how to collaborate to defeat the one,
01:37:13 the dictator, the authoritarian alpha male
01:37:16 that controlled the tribe.
01:37:18 Is there another explanation?
01:37:20 Was there a 2001: A Space Odyssey type of monolith
01:37:24 that came down to Earth?
01:37:25 Well, I think all of those things
01:37:27 you suggested are good candidates,
01:37:28 fire and cooking, right?
01:37:30 So that’s clearly important for energy efficiency,
01:37:35 cooking our meat and then being able to be more efficient
01:37:39 about eating it and consuming the energy.
01:37:42 I think that’s huge, and then there’s utilizing fire and tools.
01:37:45 I think you’re right about the tribal cooperation aspects
01:37:48 and probably language is part of that
01:37:51 because probably that’s what allowed us
01:37:52 to outcompete Neanderthals
01:37:53 and perhaps less cooperative species.
01:37:56 So that may be the case.
01:37:58 Tool making, spears, axes, I think that let us,
01:38:02 I mean, I think it’s pretty clear now
01:38:03 that humans were responsible
01:38:05 for a lot of the extinctions of megafauna,
01:38:07 especially in the Americas when humans arrived.
01:38:10 So you can imagine once you discover tool usage
01:38:14 how powerful that would have been
01:38:15 and how scary for animals.
01:38:17 So I think all of those could have been explanations for it.
01:38:20 The interesting thing is that it’s a bit
01:38:22 like general intelligence too:
01:38:24 it’s very costly to begin with to have a brain,
01:38:28 and especially a general purpose brain
01:38:29 rather than a special purpose one
01:38:30 because the amount of energy our brains use,
01:38:32 I think it’s like 20% of the body’s energy
01:38:34 and it’s massive. And even, you know, thinking chess,
01:38:36 one of the funny things that we used to say
01:38:39 is that it’s as much energy as a racing driver uses
01:38:41 for a whole Formula One race,
01:38:43 just playing a game of serious high level chess,
01:38:46 which you wouldn’t think, just sitting there,
01:38:49 because the brain’s using so much energy.
01:38:52 So in order for an animal, an organism to justify that,
01:38:54 there has to be a huge payoff.
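As an aside, the 20% figure can be turned into a quick back-of-envelope estimate of what a long chess game costs the brain. A minimal sketch, assuming approximate textbook values throughout (none of the numbers below come from the conversation; the Formula One comparison concerns whole-body expenditure under tournament stress, which runs well above this resting baseline):

```python
# Back-of-envelope: energy the brain uses during a long chess game,
# from the ~20% share of resting metabolism mentioned above.
# All constants are approximate textbook values (assumed).

BASAL_KCAL_PER_DAY = 2000   # typical adult resting metabolism (assumed)
BRAIN_FRACTION = 0.20       # brain's share of resting energy use
KCAL_TO_JOULES = 4184
SECONDS_PER_DAY = 24 * 3600

# Average brain power in watts (joules per second).
brain_watts = BASAL_KCAL_PER_DAY * BRAIN_FRACTION * KCAL_TO_JOULES / SECONDS_PER_DAY

# Energy over a long classical chess game (assume 5 hours).
chess_hours = 5
chess_kcal = brain_watts * chess_hours * 3600 / KCAL_TO_JOULES

print(f"Brain power: ~{brain_watts:.0f} W")                  # ~19 W
print(f"Brain energy, 5-hour game: ~{chess_kcal:.0f} kcal")  # ~83 kcal at rest
```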
01:38:57 And the problem with half a brain
01:39:00 or half intelligence, say the IQ of a monkey brain,
01:39:06 is it’s not clear you can justify that evolutionarily
01:39:10 until you get to the human level brain.
01:39:12 And so, but how do you do that jump?
01:39:14 It’s very difficult,
01:39:15 which is why I think it has only been done once
01:39:17 from the sort of specialized brains that you see in animals
01:39:19 to this sort of general purpose,
01:39:22 Turing powerful brains that humans have
01:39:26 and which allows us to invent the modern world.
01:39:29 And it takes a lot to cross that barrier.
01:39:33 And I think we’ve seen the same with AI systems,
01:39:35 which is that maybe until very recently,
01:39:38 it’s always been easier to craft a specific solution
01:39:40 to a problem like chess than it has been
01:39:43 to build a general learning system
01:39:44 that could potentially do many things.
01:39:46 Cause initially that system will be way worse
01:39:49 and less efficient than the specialized system.
01:39:52 So one of the interesting quirks of the human mind,
01:39:55 of this evolved system, is that it appears to be conscious.
01:40:01 This is something we don’t quite understand,
01:40:02 but it seems very special: its ability
01:40:07 to have a subjective experience,
01:40:08 that it feels like something to eat a cookie,
01:40:12 the deliciousness of it, or to see a color,
01:40:14 that kind of stuff.
01:40:15 Do you think in order to solve intelligence,
01:40:17 we also need to solve consciousness along the way?
01:40:20 Do you think AGI systems need to have consciousness
01:40:23 in order to be truly intelligent?
01:40:28 Yeah, we thought about this a lot actually.
01:40:29 And I think that my guess is that consciousness
01:40:33 and intelligence are double dissociable.
01:40:35 So you can have one without the other both ways.
01:40:38 And I think you can see that with consciousness
01:40:40 in that I think some animals and pets,
01:40:44 if you have a pet dog or something like that,
01:40:46 you can see some of the higher animals and dolphins,
01:40:48 things like that have self awareness
01:40:51 and are very sociable, seem to dream.
01:40:57 A lot of the traits one would regard
01:40:59 as being kind of conscious and self aware,
01:41:02 but yet they’re not that smart, right?
01:41:05 So they’re not that intelligent
01:41:06 by say IQ standards or something like that.
01:41:08 Yeah, it’s also possible that our understanding
01:41:11 of intelligence is flawed, like putting an IQ to it.
01:41:14 Maybe the thing that a dog can do
01:41:17 has actually gone very far along the path of intelligence,
01:41:20 and we humans are just able to play chess
01:41:23 and maybe write poems.
01:41:24 Right, but if we go back to the idea of AGI
01:41:27 and general intelligence, dogs are very specialized, right?
01:41:29 Most animals are pretty specialized.
01:41:30 They can be amazing at what they do,
01:41:32 but they’re like kind of elite sports people or something,
01:41:35 right, so they do one thing extremely well
01:41:38 because their entire brain is optimized.
01:41:40 They have somehow convinced the entirety
01:41:41 of the human population to feed them and service them.
01:41:44 So in some way they’re in control.
01:41:46 Yes, exactly.
01:41:47 Well, we co evolved to some crazy degree, right?
01:41:50 Including the way the dogs even wag their tails
01:41:53 and twitch their noses, right?
01:41:55 Which we find inexplicably cute.
01:41:58 But I think you can also see intelligence on the other side.
01:42:01 So systems like artificial systems
01:42:03 that are amazingly smart at certain things
01:42:07 like maybe playing go and chess and other things,
01:42:09 but they don’t feel at all in any shape or form conscious
01:42:13 in the way that you do to me or I do to you.
01:42:17 And I think actually building AI,
01:42:21 these intelligent constructs,
01:42:24 is one of the best ways to explore
01:42:25 the mystery of consciousness, to break it down
01:42:28 because we’re gonna have devices
01:42:31 that are pretty smart at certain things
01:42:34 or capable at certain things,
01:42:36 but potentially won’t have any semblance
01:42:39 of self awareness or other things.
01:42:40 And in fact, I would advocate, if there’s a choice,
01:42:43 building, in the first place,
01:42:45 AI systems that are not conscious to begin with,
01:42:48 that are just tools, until we understand them
01:42:52 and their capabilities better.
01:42:53 So on that topic, just not as the CEO of DeepMind,
01:42:58 just as a human being, let me ask you
01:43:00 about this one particular anecdotal evidence
01:43:03 of the Google engineer who made a comment
01:43:07 or believed that there’s some aspect of a language model,
01:43:11 the LaMDA language model, that exhibited sentience.
01:43:15 So you said you believe there might be a responsibility
01:43:18 to build systems that are not sentient.
01:43:21 And this experience of a particular engineer,
01:43:23 I think I’d love to get your general opinion
01:43:25 on this kind of thing, but I think it will happen
01:43:28 more and more and more, not just with engineers,
01:43:31 but when people out there that don’t have
01:43:33 an engineering background start interacting
01:43:34 with increasingly intelligent systems.
01:43:37 We anthropomorphize them.
01:43:38 They start to have deep, impactful interactions with us,
01:43:44 such that we miss them when they’re gone.
01:43:47 And we sure as heck feel like they’re living entities,
01:43:51 self aware entities, and maybe even
01:43:54 we project sentience onto them.
01:43:55 So what’s your thought about this particular system?
01:44:01 Have you ever met a language model that’s sentient?
01:44:04 No, no.
01:44:06 What do you make of the case when you kind of feel
01:44:10 that there are some elements of sentience in the system?
01:44:12 Yeah, so this is an interesting question
01:44:15 and obviously a very fundamental one.
01:44:17 So the first thing to say is I think that none
01:44:20 of the systems we have today, I would say,
01:44:22 even have one iota of semblance
01:44:25 of consciousness or sentience.
01:44:26 That’s my personal feeling interacting with them every day.
01:44:29 So I think it’s way premature to be discussing
01:44:32 what that engineer talked about.
01:44:34 I think at the moment it’s more of a projection
01:44:36 of the way our own minds work,
01:44:37 which is to see sort of purpose and direction
01:44:43 in almost anything. You know,
01:44:44 our brains are trained to interpret agency
01:44:48 in things, basically, even inanimate things sometimes.
01:44:52 And of course with a language system,
01:44:54 because language is so fundamental to intelligence,
01:44:57 that’s going to be easy for us to anthropomorphize that.
01:45:00 I mean, back in the day, even the first, you know,
01:45:03 the dumbest sort of template chatbots ever,
01:45:05 Eliza and the ilk of the original chatbots
01:45:09 back in the sixties fooled some people
01:45:11 under certain circumstances, right?
01:45:12 It pretended to be a psychotherapist
01:45:14 and just basically parroted back to you
01:45:16 the same question you asked it.
01:45:19 And some people believed it.
01:45:21 So this is why I think
01:45:23 the Turing test is a little bit flawed as a formal test,
01:45:25 because it depends on the sophistication of the judge,
01:45:29 whether or not they are qualified to make that distinction.
01:45:33 So I think we should talk to, you know,
01:45:36 the top philosophers about this,
01:45:38 people like Daniel Dennett and David Chalmers and others
01:45:41 who’ve obviously thought deeply about consciousness.
01:45:43 Of course, consciousness itself hasn’t been well defined;
01:45:46 there’s no agreed definition.
01:45:47 If I was to, you know, speculate about that,
01:45:52 the working definition I like is
01:45:55 it’s the way information feels when it gets processed.
01:45:58 I think maybe Max Tegmark came up with that.
01:46:00 I like that idea.
01:46:01 I don’t know if it helps us get towards
01:46:02 any more operational thing,
01:46:03 but I think it’s a nice way of viewing it.
01:46:07 I think we can obviously see from neuroscience
01:46:10 certain prerequisites that are required,
01:46:11 like self awareness, which I think is a necessary
01:46:14 but not sufficient component.
01:46:16 This idea of a self and other,
01:46:18 a set of preferences
01:46:20 that are coherent over time,
01:46:22 maybe memory.
01:46:24 These things are probably needed
01:46:26 for a sentient or conscious being.
01:46:29 But the difficult thing for us,
01:46:31 and I think this is
01:46:32 a really interesting
01:46:33 philosophical debate, is when we get closer to AGI
01:46:37 and, you know, much more powerful systems
01:46:40 than we have today,
01:46:42 how are we going to make this judgment?
01:46:44 And one way, the Turing test,
01:46:46 is sort of a behavioral judgment:
01:46:48 is the system exhibiting all the behaviors
01:46:52 that a sentient human or sentient being would exhibit?
01:46:56 Is it answering the right questions?
01:46:58 Is it saying the right things?
01:46:59 Is it indistinguishable from a human?
01:47:01 And so on.
01:47:03 But I think there’s a second thing
01:47:05 that makes us as humans regard each other as sentient,
01:47:09 right?
01:47:09 Why do we think this?
01:47:10 And I debated this with Daniel Dennett.
01:47:12 And I think there’s a second reason
01:47:13 that’s often overlooked,
01:47:15 which is that we’re running on the same substrate, right?
01:47:18 So if we’re exhibiting the same behavior,
01:47:21 more or less as humans,
01:47:22 and we’re running on the same, you know,
01:47:24 carbon based biological substrate,
01:47:26 the squishy, you know, few pounds of flesh in our skulls,
01:47:29 then the most parsimonious, I think, explanation
01:47:32 is that you’re feeling the same thing as I’m feeling, right?
01:47:35 But we will never have that second part,
01:47:37 the substrate equivalence with a machine, right?
01:47:41 So we will have to only judge based on the behavior.
01:47:43 And I think the substrate equivalence
01:47:45 is a critical part of why we make assumptions
01:47:48 that we’re conscious.
01:47:49 And in fact, even with animals, high level animals,
01:47:51 why we think they might be,
01:47:52 because they’re exhibiting some of the behaviors
01:47:54 we would expect from a sentient animal.
01:47:55 And we know they’re made of the same things,
01:47:57 biological neurons.
01:47:58 So we’re gonna have to come up with explanations
01:48:02 or models that bridge the substrate differences
01:48:06 between machines and humans
01:48:08 to get anywhere beyond the behavioral.
01:48:10 But to me, sort of the practical question
01:48:12 is very interesting and very important.
01:48:16 When you have millions, perhaps billions of people
01:48:18 believing that you have a sentient AI,
01:48:20 believing what that Google engineer believed,
01:48:24 which I just see as an obvious, very near term future thing,
01:48:28 certainly on the path to AGI,
01:48:31 how does that change the world?
01:48:33 What’s the responsibility of the AI system
01:48:35 to help those millions of people?
01:48:38 And also what’s the ethical thing?
01:48:39 Because you can make a lot of people happy
01:48:44 by creating a meaningful, deep experience
01:48:48 with a system that’s faking it before it makes it.
01:48:52 And I don’t know, are we the right ones to decide?
01:48:56 Who is to say what’s the right thing to do?
01:48:59 Should AI always be tools?
01:49:01 Why are we constraining AI to always be tools
01:49:05 as opposed to friends?
01:49:07 Yeah, I think, well, I mean, these are fantastic questions
01:49:11 and also critical ones.
01:49:13 And we’ve been thinking about this
01:49:16 since the start of DeepMind and before that,
01:49:18 because we planned for success,
01:49:19 however remote that looked back in 2010.
01:49:24 And we’ve always had sort of these ethical considerations
01:49:26 as fundamental at DeepMind.
01:49:29 And my current thinking on the language models
01:49:32 and large models is they’re not ready
01:49:33 to be deployed at scale; we don’t understand them well enough yet
01:49:36 in terms of analysis tools and guardrails,
01:49:40 what they can and can’t do, and so on.
01:49:42 Because I think
01:49:45 there are still big ethical questions,
01:49:46 like should an AI system always announce
01:49:48 that it is an AI system to begin with?
01:49:50 Probably yes.
01:49:52 What do you do about answering those philosophical questions
01:49:55 about the feelings people may have about AI systems,
01:49:58 perhaps incorrectly attributed?
01:50:00 So I think there’s a whole bunch of research
01:50:02 that needs to be done first
01:50:06 before you can responsibly deploy these systems at scale.
01:50:09 That would at least be my current position.
01:50:12 Over time, I’m very confident we’ll have those tools
01:50:15 for interpretability and analysis.
01:50:20 And then with the ethical quandary,
01:50:23 I think there it’s important to look beyond just science.
01:50:28 That’s why I think philosophy, social sciences,
01:50:31 even theology, other things like that come into it,
01:50:34 and arts and humanities:
01:50:37 what does it mean to be human, the spirit of being human,
01:50:40 and how to enhance that and the human condition, right?
01:50:43 And allow us to experience things
01:50:45 we could never experience before
01:50:46 and improve the overall human condition
01:50:49 and humanity overall, get radical abundance,
01:50:51 solve many scientific problems, solve disease.
01:50:54 So this is the era I think, this is the amazing era
01:50:56 I think we’re heading into if we do it right.
01:50:59 But we’ve got to be careful.
01:51:00 We’ve already seen with things like social media,
01:51:02 how dual use technologies can be misused,
01:51:05 firstly, by bad actors or naive actors or crazy actors,
01:51:12 right, so there’s that set of just the common
01:51:14 or garden misuse of existing dual use technology.
01:51:18 And then of course, there’s an additional thing
01:51:20 that has to be overcome with AI
01:51:21 that eventually it may have its own agency.
01:51:24 So it could be good or bad in and of itself.
01:51:28 So I think these questions have to be approached
01:51:31 very carefully using the scientific method, I would say,
01:51:35 in terms of hypothesis generation, careful control testing,
01:51:38 not live A, B testing out in the world,
01:51:40 because with powerful technologies like AI,
01:51:44 if something goes wrong, it may cause a lot of harm
01:51:47 before you can fix it.
01:51:49 It’s not like an imaging app or game app
01:51:52 where if something goes wrong, it’s relatively easy to fix
01:51:56 and the harm is relatively small.
01:51:57 So I think it comes with the usual cliche of,
01:52:02 like with a lot of power comes a lot of responsibility.
01:52:05 And I think that’s the case here with things like AI,
01:52:07 given the enormous opportunity in front of us.
01:52:11 And I think we need a lot of voices
01:52:14 and as many inputs as possible into things like the design
01:52:17 of the systems and the values they should have
01:52:19 and what goals they should be put to.
01:52:22 I think as wide a group of voices as possible
01:52:24 beyond just the technologists is needed to input into that
01:52:27 and to have a say in that,
01:52:29 especially when it comes to deployment of these systems,
01:52:31 which is when the rubber really hits the road,
01:52:33 it really affects the general person in the street
01:52:35 rather than fundamental research.
01:52:37 And that’s why I say, I think as a first step,
01:52:40 it would be better, if we have the choice,
01:52:42 to build these systems as tools,
01:52:45 and I’m not saying that they should never go beyond tools
01:52:47 because of course the potential is there
01:52:50 for it to go way beyond just tools.
01:52:52 But I think that would be a good first step
01:52:55 to allow us to carefully experiment
01:52:58 and understand what these things can do.
01:53:01 So the leap between tool and sentient being
01:53:05 is one we should be very careful about.
01:53:08 Let me ask a dark personal question.
01:53:11 So you’re one of the most brilliant people
01:53:13 in the AI community, you’re also one of the most kind
01:53:16 and if I may say sort of loved people in the community.
01:53:20 That said, creation of a super intelligent AI system
01:53:25 would be one of the most powerful things in the world,
01:53:32 tools or otherwise.
01:53:34 And again, as the old saying goes, power corrupts
01:53:38 and absolute power corrupts absolutely.
01:53:41 You are likely to be one of the people,
01:53:47 I would say probably the most likely person
01:53:50 to be in the control of such a system.
01:53:53 Do you think about the corrupting nature of power
01:53:57 when you talk about these kinds of systems
01:53:59 that as all dictators and people have caused atrocities
01:54:04 in the past, always think they’re doing good,
01:54:07 but they don’t do good because the power
01:54:10 has polluted their mind about what is good
01:54:12 and what is evil.
01:54:13 Do you think about this stuff
01:54:14 or are we just focused on language models?
01:54:16 No, I think about them all the time
01:54:18 and I think what are the defenses against that?
01:54:22 I think one thing is to remain very grounded
01:54:24 and sort of humble, no matter what you do or achieve.
01:54:28 And I try to do that. My best friends
01:54:31 are still my set of friends
01:54:32 from my undergraduate Cambridge days;
01:54:34 my family and friends are very important.
01:54:39 I’ve always tried to be a multidisciplinary person,
01:54:42 and I think it helps to keep you humble
01:54:43 because no matter how good you are at one topic,
01:54:45 someone will be better than you at that.
01:54:47 And learning a new topic, a new field,
01:54:50 again from scratch is very humbling, right?
01:54:53 So for me, that’s been biology over the last five years,
01:54:56 a huge topic area, and I just love doing that,
01:55:00 and it helps to keep you grounded
01:55:01 and keeps you open minded.
01:55:04 And then the other important thing
01:55:06 is to have a really good, amazing set of people around you
01:55:10 at your company or your organization
01:55:11 who are also very ethical and grounded themselves
01:55:14 and help to keep you that way.
01:55:16 And then ultimately just to answer your question,
01:55:18 I hope we’re gonna be a big part of birthing AI
01:55:22 and that being the greatest benefit to humanity
01:55:24 of any tool or technology ever,
01:55:26 and getting us into a world of radical abundance
01:55:29 and curing diseases and solving many of the big challenges
01:55:33 we have in front of us.
01:55:34 And then ultimately help the ultimate flourishing
01:55:37 of humanity to travel the stars
01:55:39 and find those aliens if they are there.
01:55:41 And if they’re not there, find out why they’re not there,
01:55:43 what is going on here in the universe.
01:55:46 This is all to come.
01:55:47 And that’s what I’ve always dreamed about.
01:55:50 But I think AI is too big an idea
01:55:53 to belong to any one group.
01:55:54 There’ll be a certain set of pioneers who get there first.
01:55:57 I hope we’re in the vanguard
01:55:58 so we can influence how that goes.
01:56:00 And I think it matters who the builders
01:56:02 of AI systems are, which cultures they come from,
01:56:06 and what values they have.
01:56:07 Cause I think even though the AI system
01:56:09 is gonna learn for itself most of its knowledge,
01:56:11 there’ll be a residue in the system of the culture
01:56:14 and the values of the creators of that system.
01:56:17 And there’s interesting questions
01:56:18 to discuss about that geopolitically.
01:56:21 Different cultures,
01:56:22 we’re in a more fragmented world than ever, unfortunately.
01:56:24 I think in terms of global cooperation,
01:56:27 we see that in things like climate
01:56:29 where we can’t seem to get our act together globally
01:56:32 to cooperate on these pressing matters.
01:56:34 I hope that will change over time.
01:56:35 Perhaps if we get to an era of radical abundance,
01:56:38 we don’t have to be so competitive anymore.
01:56:40 Maybe we can be more cooperative
01:56:42 if resources aren’t so scarce.
01:56:44 It’s true that in terms of power corrupting
01:56:48 and leading to destructive things,
01:56:50 it seems that some of the atrocities of the past happened
01:56:53 when there’s a significant constraint on resources.
01:56:56 I think that’s the first thing.
01:56:57 I don’t think that’s enough.
01:56:58 I think scarcity is one thing that’s led to competition,
01:57:01 sort of zero sum game thinking.
01:57:03 I would like us to all be in a positive sum world.
01:57:06 And I think for that, you have to remove scarcity.
01:57:08 I don’t think that’s enough, unfortunately,
01:57:09 to get world peace
01:57:10 because there’s also other corrupting things
01:57:12 like wanting power over people and this kind of stuff,
01:57:15 which is not necessarily satisfied by just abundance.
01:57:19 But I think it will help.
01:57:22 But I think ultimately, AI is not gonna be run
01:57:24 by any one person or one organization.
01:57:26 I think it should belong to the world, belong to humanity.
01:57:29 And I think there’ll be many ways this will happen.
01:57:33 And ultimately, everybody should have a say in that.
01:57:36 Do you have advice for young people in high school,
01:57:42 in college, maybe if they’re interested in AI
01:57:45 or interested in having a big impact on the world,
01:57:50 what they should do to have a career they can be proud of
01:57:53 or to have a life they can be proud of?
01:57:55 I love giving talks to the next generation.
01:57:57 What I say to them is actually two things.
01:57:59 I think the most important things to learn about
01:58:02 and to find out about when you’re young
01:58:04 are, first of all,
01:58:07 two things.
01:58:07 One is find your true passions.
01:58:09 And I think you can do that,
01:58:11 the way to do that is to explore as many things as possible
01:58:14 while you’re young and you have the time
01:58:16 and you can take those risks.
01:58:19 I would also encourage people to look at
01:58:21 finding the connections between things in a unique way.
01:58:24 I think that’s a really great way to find a passion.
01:58:27 The second thing I would advise is know yourself.
01:58:30 So spend a lot of time understanding
01:58:33 how you work best.
01:58:35 Like what are the optimal times to work?
01:58:37 What are the optimal ways that you study?
01:58:39 How do you deal with pressure?
01:58:42 Sort of test yourself in various scenarios
01:58:44 and try and improve your weaknesses,
01:58:47 but also find out what your unique skills and strengths are
01:58:50 and then hone those.
01:58:52 So then that’s what will be of super value
01:58:54 to the world later on.
01:58:55 And if you can then combine those two things
01:58:57 and find passions that you’re genuinely excited about
01:59:01 that intersect with what your unique strong skills are,
01:59:05 then you’re onto something incredible
01:59:07 and I think you can make a huge difference in the world.
01:59:10 So let me ask about know yourself.
01:59:12 This is fun.
01:59:13 This is fun.
01:59:14 Quick questions about a day in the life, the perfect day,
01:59:18 the perfect productive day in the life of Demis Hassabis.
01:59:21 Maybe these days there’s a lot involved,
01:59:26 so maybe a slightly younger Demis Hassabis,
01:59:29 where you could focus on a single project maybe.
01:59:33 How early do you wake up?
01:59:34 Are you a night owl?
01:59:35 Do you wake up early in the morning?
01:59:36 What are some interesting habits?
01:59:39 How many dozens of cups of coffee do you drink a day?
01:59:42 What’s the computer that you use?
01:59:46 What’s the setup?
01:59:47 How many screens?
01:59:47 What kind of keyboard?
01:59:49 Are we talking Emacs Vim
01:59:51 or are we talking something more modern?
01:59:53 So there’s a bunch of those questions.
01:59:54 So maybe day in the life, what’s the perfect day involved?
01:59:58 Well, these days it’s quite different
02:00:00 from say 10, 20 years ago.
02:00:02 Back 10, 20 years ago, it would have been
02:00:05 a whole day of research, individual research or programming,
02:00:10 doing some experiment, neuroscience,
02:00:12 computer science experiment,
02:00:14 reading lots of research papers.
02:00:16 And then perhaps at nighttime,
02:00:19 reading science fiction books or playing some games.
02:00:25 But lots of focus, so like deep focused work
02:00:28 on whether it’s programming or reading research papers.
02:00:32 Yes, so that would be lots of deep focus work.
02:00:35 These days for the last sort of, I guess, five to 10 years,
02:00:39 I’ve actually got quite a structure
02:00:41 that works very well for me now,
02:00:42 which is that I’m a complete night owl, always have been.
02:00:46 So I optimize for that.
02:00:47 So I’ll basically do a normal day’s work,
02:00:50 get into work about 11 o’clock
02:00:52 and sort of do work to about seven in the office.
02:00:56 And I will arrange back to back meetings
02:00:58 for the entire time of that.
02:01:00 And meet with as many people as possible.
02:01:03 So that’s my collaboration management part of the day.
02:01:06 Then I go home, spend time with the family and friends,
02:01:10 have dinner, relax a little bit.
02:01:13 And then I start a second day of work.
02:01:15 I call it my second day of work around 10 p.m., 11 p.m.
02:01:18 And that goes to about the small hours of the morning,
02:01:21 four or five in the morning, when I will do my thinking
02:01:24 and reading and research, writing research papers.
02:01:29 Sadly, I don’t have time to code anymore,
02:01:30 and it’s not efficient to do that these days,
02:01:34 given the amount of time I have.
02:01:37 But that’s when I do, you know,
02:01:38 maybe do the long kind of stretches
02:01:40 of thinking and planning.
02:01:42 And then, you know, using email and other things,
02:01:45 I would fire off a lot of things to my team
02:01:47 to deal with the next morning,
02:01:49 having actually thought about it overnight:
02:01:51 we should go for this project,
02:01:53 or arrange this meeting the next day.
02:01:54 When you’re thinking through a problem,
02:01:56 are you talking about a sheet of paper with a pen?
02:01:58 Is there some structured process?
02:02:01 I still like pencil and paper best for working out things,
02:02:04 but these days it’s just so efficient
02:02:06 to read research papers just on the screen.
02:02:08 I still often print them out, actually.
02:02:10 I still prefer to mark out things.
02:02:12 And I find it goes into the brain better
02:02:14 and sticks in the brain better
02:02:16 when you’re still using physical pen and pencil and paper.
02:02:19 So you take notes with the…
02:02:20 I have lots of notes, electronic ones,
02:02:22 and also whole stacks of notebooks that I use at home, yeah.
02:02:27 On some of these most challenging next steps, for example,
02:02:30 stuff none of us know about that you’re working on,
02:02:33 you’re thinking,
02:02:35 there’s some deep thinking required there, right?
02:02:37 Like what is the right problem?
02:02:39 What is the right approach?
02:02:41 Because you’re gonna have to invest a huge amount of time
02:02:43 for the whole team.
02:02:44 They’re going to have to pursue this thing.
02:02:46 What’s the right way to do it?
02:02:48 Is RL gonna work here or not?
02:02:50 Yes.
02:02:50 What’s the right thing to try?
02:02:53 What’s the right benchmark to use?
02:02:55 Do we need to construct a benchmark from scratch?
02:02:57 All those kinds of things.
02:02:58 Yes.
02:02:59 So I think of all those kinds of things
02:03:00 in the nighttime phase, but also much more,
02:03:03 I find I’ve always found the quiet hours of the morning
02:03:07 when everyone’s asleep, it’s super quiet outside.
02:03:11 I love that time.
02:03:12 It’s the golden hours,
02:03:13 like between one and three in the morning.
02:03:16 Put some music on, some inspiring music on,
02:03:18 and then think these deep thoughts.
02:03:21 So that’s when I would read my philosophy books
02:03:24 and Spinoza, my recent favorite Kant, all these things.
02:03:28 And I read about the great scientists of history,
02:03:33 how they did things, how they thought about things.
02:03:35 So that’s when you do all your creative,
02:03:37 that’s when I do all my creative thinking.
02:03:39 And it’s good, I think people recommend
02:03:41 you do your sort of creative thinking in one block.
02:03:45 And the way I organize the day,
02:03:47 that way I don’t get interrupted.
02:03:48 Obviously, no one else is up at those times.
02:03:51 So I can go, I can sort of get super deep
02:03:55 and super into flow.
02:03:57 The other nice thing about doing it nighttime wise
02:03:59 is if I’m really onto something
02:04:02 or I’ve got really deep into something,
02:04:04 I can choose to extend it
02:04:06 and I’ll go on till six in the morning, whatever.
02:04:09 And then I’ll just pay for it the next day.
02:04:10 So I’ll be a bit tired and I won’t be my best,
02:04:12 but that’s fine.
02:04:13 I can decide looking at my schedule the next day
02:04:16 and given where I’m at with this particular thought
02:04:19 or creative idea that I’m gonna pay that cost the next day.
02:04:22 So I think that’s more flexible than morning people
02:04:26 who do that, they get up at four in the morning.
02:04:28 They can also do those golden hours then,
02:04:31 but then their scheduled day
02:04:32 starts at breakfast, 8 a.m.,
02:04:34 whatever they have their first meeting.
02:04:36 And then it’s hard, you have to reschedule a day
02:04:37 if you’re in flow.
02:04:38 So I don’t have to do that.
02:04:39 So you can keep pulling on a special thread of thought
02:04:41 that you’re really passionate about.
02:04:45 This is where some of the greatest ideas
02:04:46 could potentially come is when you just lose yourself
02:04:49 late into the night.
02:04:51 And for the meetings, I mean, you’re loading in
02:04:53 really hard problems in a very short amount of time.
02:04:56 So you have to do some kind of first principles thinking
02:04:58 here, it’s like, what’s the problem?
02:05:00 What’s the state of things?
02:05:01 What’s the right next steps?
02:05:03 You have to get really good at context switching,
02:05:05 which is one of the hardest things,
02:05:07 because especially as we do so many things,
02:05:09 if you include all the scientific things we do,
02:05:10 scientific fields we’re working in,
02:05:12 these are complex fields in themselves.
02:05:15 And you have to sort of keep abreast of that.
02:05:18 But I enjoy it.
02:05:20 I’ve always been a sort of generalist in a way.
02:05:23 And that’s actually what happened in my games career
02:05:25 after chess.
02:05:27 One of the reasons I stopped playing chess
02:05:29 was because I got into computers,
02:05:30 but also I started realizing there were many other
02:05:32 great games out there to play too.
02:05:33 So I’ve always been that way inclined, multidisciplinary.
02:05:36 And there’s too many interesting things in the world
02:05:39 to spend all your time just on one thing.
02:05:41 So you mentioned Spinoza, gotta ask the big, ridiculously
02:05:45 big question about life.
02:05:47 What do you think is the meaning of this whole thing?
02:05:50 Why are we humans here?
02:05:52 You’ve already mentioned that perhaps the universe
02:05:55 created us.
02:05:56 Is that why you think we’re here?
02:05:58 To understand how the universe works?
02:06:00 Yeah, I think my answer to that would be,
02:06:02 and at least the life I’m living,
02:06:03 is to gain knowledge
02:06:08 and understand the universe.
02:06:10 That’s what I think; I can’t see any higher purpose
02:06:13 than that. If you think back to the classical Greeks,
02:06:15 there was the virtue of gaining knowledge.
02:06:17 It’s, I think it’s one of the few true virtues
02:06:20 is to understand the world around us
02:06:23 and the context and humanity better.
02:06:25 And I think if you do that, you become more compassionate
02:06:29 and more understanding yourself and more tolerant
02:06:32 and all these, I think all these other things
02:06:33 may flow from that.
02:06:34 And to me, understanding the nature of reality,
02:06:37 that is the biggest question.
02:06:38 “What is going on here?” is sometimes the colloquial way I say it.
02:06:41 What is really going on here?
02:06:43 It’s so mysterious.
02:06:44 I feel like we’re in some huge puzzle.
02:06:47 But the world also seems to be,
02:06:49 the universe seems to be, structured in a particular way.
02:06:52 You know, why is it structured in a way
02:06:54 that science is even possible?
02:06:55 That, you know, the scientific method works,
02:06:58 things are repeatable.
02:07:00 It feels like it’s almost structured in a way
02:07:02 to be conducive to gaining knowledge.
02:07:05 So I feel like, and you know,
02:07:06 why should computers even be possible?
02:07:07 Isn’t it amazing that computational electronic devices
02:07:11 are possible, and they’re made of sand,
02:07:15 silicon, one of the most common elements
02:07:17 we have in the Earth’s crust.
02:07:19 It could have been made of diamond or something,
02:07:21 then we would have only had one computer.
02:07:23 So a lot of things are kind of slightly suspicious to me.
02:07:26 It sure as heck sounds, this puzzle sure as heck sounds
02:07:29 like something we talked about earlier,
02:07:30 what it takes to design a game that’s really fun to play
02:07:35 for prolonged periods of time.
02:07:36 And it does seem like this puzzle, like you mentioned,
02:07:40 the more you learn about it,
02:07:42 the more you realize how little you know.
02:07:44 So it humbles you, but excites you
02:07:46 by the possibility of learning more.
02:07:49 It’s one heck of a puzzle we got going on here.
02:07:53 So like I mentioned, of all the people in the world,
02:07:56 you’re very likely to be the one who creates the AGI system
02:08:02 that achieves human level intelligence and goes beyond it.
02:08:06 So if you got a chance and very well,
02:08:08 you could be the person that goes into the room
02:08:10 with the system and have a conversation.
02:08:13 Maybe you only get to ask one question.
02:08:15 If you do, what question would you ask her?
02:08:19 I would probably ask, what is the true nature of reality?
02:08:23 I think that’s the question.
02:08:24 I don’t know if I’d understand the answer
02:08:25 because maybe it would be 42 or something like that,
02:08:28 but that’s the question I would ask.
02:08:32 And then there’ll be a deep sigh from the system,
02:08:34 like, all right, how do I explain to this human?
02:08:37 All right, let me, I don’t have time to explain.
02:08:41 Maybe I’ll draw you a picture.
02:08:44 I mean, how do you even begin to answer that question?
02:08:51 Well, I think it would.
02:08:52 What do you think the answer could possibly look like?
02:08:55 I think it could start looking like
02:08:59 more fundamental explanations of physics
02:09:02 would be the beginning.
02:09:03 A more careful specification of that,
02:09:05 walking us through by the hand
02:09:07 as to what one would do to maybe prove those things out.
02:09:10 Maybe giving you glimpses of what things
02:09:13 you totally miss in the physics of today.
02:09:15 Exactly, exactly.
02:09:16 Just glimpses of, like, there’s
02:09:22 a much more elaborate world
02:09:23 or a much simpler world or something.
02:09:26 A much deeper, maybe simpler explanation of things,
02:09:30 right, than the standard model of physics,
02:09:31 which we know doesn’t work, but we still keep adding to.
02:09:34 So that’s how I think the beginning
02:09:37 of an explanation would look.
02:09:38 And it would start encompassing many of the mysteries
02:09:41 that we have wondered about for thousands of years,
02:09:43 like consciousness, dreaming, life, and gravity,
02:09:47 all of these things.
02:09:48 Yeah, giving us glimpses of explanations for those things.
02:09:52 Well, Demis, you’re one of the special human beings
02:09:57 in this giant puzzle of ours,
02:09:59 and it’s a huge honor that you would take a pause
02:10:01 from the bigger puzzle to solve this small puzzle
02:10:03 of a conversation with me today.
02:10:04 It’s truly an honor and a pleasure.
02:10:06 Thank you so much.
02:10:07 Thank you, I really enjoyed it.
02:10:07 Thanks, Lex.
02:10:09 Thanks for listening to this conversation
02:10:11 with Demis Hassabis.
02:10:11 To support this podcast,
02:10:13 please check out our sponsors in the description.
02:10:15 And now, let me leave you with some words
02:10:17 from Edsger Dijkstra.
02:10:20 Computer science is no more about computers
02:10:23 than astronomy is about telescopes.
02:10:26 Thank you for listening, and hope to see you next time.