Noam Chomsky: Language, Cognition, and Deep Learning #53

Transcript

00:00:00 The following is a conversation with Noam Chomsky.

00:00:03 He’s truly one of the great minds of our time

00:00:06 and is one of the most cited scholars

00:00:08 in the history of our civilization.

00:00:10 He has spent over 60 years at MIT

00:00:13 and recently also joined the University of Arizona,

00:00:16 where we met for this conversation.

00:00:18 But it was at MIT about four and a half years ago

00:00:21 when I first met Noam.

00:00:23 My first few days there,

00:00:24 I remember getting into an elevator at Stata Center,

00:00:27 pressing the button for whatever floor,

00:00:29 looking up and realizing it was just me and Noam Chomsky

00:00:33 riding the elevator,

00:00:35 just me and one of the seminal figures of linguistics,

00:00:38 cognitive science, philosophy,

00:00:40 and political thought in the past century, if not ever.

00:00:43 I tell that silly story because I think life is made up

00:00:47 of funny little defining moments that you never forget

00:00:50 for reasons that may be too poetic to try and explain.

00:00:54 That was one of mine.

00:00:56 Noam has been an inspiration to me and millions of others.

00:00:59 It was truly an honor for me

00:01:01 to sit down with him in Arizona.

00:01:03 I traveled there just for this conversation.

00:01:06 And in a rare, heartbreaking moment,

00:01:09 after everything was set up and tested,

00:01:11 the camera was moved and accidentally,

00:01:13 the recording button was pressed, stopping the recording.

00:01:17 So I have good audio of both of us, but no video of Noam.

00:01:21 Just the video of me and my sleep deprived but excited face

00:01:25 that I get to keep as a reminder of my failures.

00:01:29 Most people just listen to this audio version

00:01:31 of the podcast as opposed to watching it on YouTube.

00:01:34 But still, it’s heartbreaking for me.

00:01:38 I hope you understand and still enjoy this conversation

00:01:40 as much as I did.

00:01:42 The depth of intellect that Noam showed

00:01:44 and his willingness to truly listen to me,

00:01:47 a silly looking Russian in a suit.

00:01:50 It was humbling and something I’m deeply grateful for.

00:01:55 As some of you know, this podcast is a side project for me,

00:01:59 where my main journey and dream is to build AI systems

00:02:03 that do some good for the world.

00:02:05 This latter effort takes up most of my time,

00:02:07 but for the moment has been mostly private.

00:02:10 But the former, the podcast,

00:02:12 is something I put my heart and soul into.

00:02:15 And I hope you feel that, even when I screw things up.

00:02:18 I recently started doing ads

00:02:21 at the end of the introduction.

00:02:22 I’ll do one or two minutes after introducing the episode

00:02:25 and never any ads in the middle

00:02:27 that break the flow of the conversation.

00:02:29 I hope that works for you

00:02:31 and doesn’t hurt the listening experience.

00:02:33 This is the Artificial Intelligence Podcast.

00:02:37 If you enjoy it, subscribe on YouTube,

00:02:39 give it five stars on Apple Podcast,

00:02:41 support it on Patreon,

00:02:43 or simply connect with me on Twitter,

00:02:45 at Lex Fridman, spelled F R I D M A N.

00:02:49 This show is presented by Cash App,

00:02:51 the number one finance app in the App Store.

00:02:54 I personally use Cash App to send money to friends,

00:02:56 but you can also use it to buy, sell,

00:02:58 and deposit Bitcoin in just seconds.

00:03:01 Cash App also has a new investing feature.

00:03:04 You can buy fractions of a stock, say $1 worth,

00:03:07 no matter what the stock price is.

00:03:09 Broker services are provided by Cash App Investing,

00:03:11 a subsidiary of Square and member SIPC.

00:03:15 I’m excited to be working with Cash App

00:03:17 to support one of my favorite organizations called FIRST,

00:03:20 best known for their FIRST Robotics and Lego competitions.

00:03:24 They educate and inspire hundreds of thousands of students

00:03:27 in over 110 countries

00:03:29 and have a perfect rating on Charity Navigator,

00:03:31 which means the donated money

00:03:33 is used to maximum effectiveness.

00:03:36 When you get Cash App from the App Store,

00:03:38 or Google Play and use code LexPodcast,

00:03:42 you’ll get $10 and Cash App will also donate $10 to FIRST,

00:03:47 which again is an organization that I’ve personally seen

00:03:49 inspire girls and boys to dream of engineering a better world.

00:03:54 And now here’s my conversation with Noam Chomsky.

00:03:59 I apologize for the absurd philosophical question,

00:04:04 but if an alien species were to visit Earth,

00:04:07 do you think we would be able to find a common language

00:04:10 or protocol of communication with them?

00:04:13 There are arguments to the effect that we could.

00:04:18 In fact, one of them was Marvin Minsky’s.

00:04:22 Back about 20 or 30 years ago,

00:04:24 he performed a brief experiment with a student of his,

00:04:30 Dan Bobrow. They essentially ran

00:04:33 the simplest possible Turing machines,

00:04:36 just running free, to see what would happen.

00:04:39 And most of them crashed,

00:04:42 either got into an infinite loop or stopped.

00:04:47 The few that persisted,

00:04:51 essentially gave something like arithmetic.

00:04:55 And his conclusion from that was that

00:04:59 if some alien species developed higher intelligence,

00:05:04 they would at least have arithmetic,

00:05:07 they would at least have what the simplest computer would do.

00:05:12 And in fact, he didn’t know that at the time,

00:05:16 but the core principles of natural language

00:05:20 are based on operations which yield something

00:05:25 like arithmetic in the limiting case, in the minimal case.

00:05:29 So it’s conceivable that a mode of communication

00:05:34 could be established based on the core properties

00:05:38 of human language and the core properties of arithmetic,

00:05:41 which maybe are universally shared.

00:05:44 So it’s conceivable.
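
To make that concrete, here is a minimal Python sketch in the spirit of the experiment described above: enumerate the simplest Turing machines, run each one freely on a blank tape, and count how many loop, how many halt having done essentially nothing, and how many leave structured, unary-number-like output behind. The two-state, two-symbol machine space, the step budget, and the marks-on-tape criterion are all illustrative assumptions, not Minsky and Bobrow’s actual setup.

```python
from itertools import product

STATES = (0, 1)        # working states; -1 is the halt state
SYMBOLS = (0, 1)       # the blank tape is all 0s
MOVES = (-1, 1)        # move the head left or right
STEP_BUDGET = 200      # treat anything longer as an infinite loop

def run(table, budget=STEP_BUDGET):
    """Run one machine on a blank tape; return ('halt', tape) or ('loop', None)."""
    tape, head, state = {}, 0, 0
    for _ in range(budget):
        write, move, nxt = table[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        state = nxt
        if state == -1:                      # reached the halt state
            return "halt", tape
    return "loop", None

def classify(tape):
    """Illustrative criterion: two or more marks left on the tape counts
    as 'arithmetic-like' (unary-number-style) output."""
    return "arithmetic-like" if sum(tape.values()) >= 2 else "trivial"

# Each of the four (state, symbol) pairs maps to one of 12 possible
# actions, so there are 12**4 = 20736 machines in this tiny space.
actions = list(product(SYMBOLS, MOVES, STATES + (-1,)))
keys = list(product(STATES, SYMBOLS))

counts = {"loop": 0, "trivial": 0, "arithmetic-like": 0}
for choice in product(actions, repeat=len(keys)):
    outcome, tape = run(dict(zip(keys, choice)))
    counts[classify(tape) if outcome == "halt" else "loop"] += 1

print(counts)  # most machines loop or halt with nothing; a few leave structure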
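```

On this toy space, the overwhelming majority of machines loop or halt with an essentially blank tape, and a small residue leaves runs of marks, which is the shape of the result Chomsky attributes to the experiment.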

00:05:46 What is the structure of that language,

00:05:50 of language as an internal system inside our mind

00:05:55 versus an external system as it’s expressed?

00:05:58 It’s not an alternative,

00:06:00 it’s two different concepts of language.

00:06:02 Different.

00:06:03 It’s a simple fact that there’s something about you,

00:06:07 a trait of yours, part of the organism, you,

00:06:11 that determines that you’re talking English

00:06:14 and not Tagalog, let’s say.

00:06:16 So there is an inner system.

00:06:19 It determines the sound and meaning

00:06:22 of the infinite number of expressions of your language.

00:06:27 It’s localized.

00:06:28 It’s not on your foot, obviously, it’s in your brain.

00:06:31 If you look more closely, it’s in specific configurations

00:06:35 of your brain.

00:06:36 And that’s essentially like the internal structure

00:06:40 of your laptop, whatever programs it has are in there.

00:06:44 Now, one of the things you can do with language,

00:06:47 it’s a marginal thing, in fact,

00:06:50 is use it to externalize what’s in your head.

00:06:54 Actually, most of your use of language

00:06:56 is thought, internal thought.

00:06:58 But you can do what you and I are now doing.

00:07:00 We can externalize it.

00:07:02 Well, the set of things that we’re externalizing

00:07:05 are an external system.

00:07:07 They’re noises in the atmosphere.

00:07:11 And you can call that language

00:07:12 in some other sense of the word.

00:07:14 But it’s not a set of alternatives.

00:07:16 These are just different concepts.

00:07:18 So how deep do the roots of language go in our brain?

00:07:23 Our mind, is it yet another feature like vision,

00:07:26 or is it something more fundamental

00:07:28 from which everything else springs in the human mind?

00:07:31 Well, in a way, it’s like vision.

00:07:33 There’s something about our genetic endowment

00:07:38 that determines that we have a mammalian

00:07:41 rather than an insect visual system.

00:07:44 And there’s something in our genetic endowment

00:07:47 that determines that we have a human language faculty.

00:07:51 No other organism has anything remotely similar.

00:07:55 So in that sense, it’s internal.

00:07:58 Now there is a long tradition,

00:07:59 which I think is valid, going back centuries

00:08:03 to the early scientific revolution

00:08:05 at least, that holds that language

00:08:09 is sort of the core of human cognitive nature.

00:08:13 It’s the source, it’s the mode for constructing thoughts

00:08:18 and expressing them.

00:08:19 That is what forms thought.

00:08:22 And it’s got fundamental creative capacities.

00:08:27 It’s free, independent, unbounded, and so on.

00:08:31 And it’s undoubtedly, I think, the basis

00:08:34 for our creative capacities

00:08:38 and the other remarkable human capacities

00:08:43 that lead to the unique achievements

00:08:47 and not so great achievements of the species.

00:08:51 The capacity to think and reason,

00:08:53 do you think that’s deeply linked with language?

00:08:56 Do you think the way we,

00:08:58 the internal language system is essentially the mechanism

00:09:01 by which we also reason internally?

00:09:04 It is undoubtedly the mechanism by which we reason.

00:09:06 There may also be other, in fact,

00:09:09 there are undoubtedly other faculties involved in reasoning.

00:09:14 We have a kind of scientific faculty,

00:09:17 nobody knows what it is,

00:09:18 but whatever it is that enables us

00:09:20 to pursue certain lines of endeavor and inquiry

00:09:25 and to decide what makes sense and doesn’t make sense

00:09:29 and to achieve a certain degree

00:09:32 of understanding of the world,

00:09:33 that uses language, but goes beyond it.

00:09:37 Just as using our capacity for arithmetic

00:09:42 is not the same as having the capacity.

00:09:44 The idea of capacity, our biology, evolution,

00:09:49 you’ve talked about it defining essentially our capacity,

00:09:52 our limit and our scope.

00:09:55 Can you try to define what limit and scope are?

00:09:58 And the bigger question,

00:10:01 do you think it’s possible to find the limit

00:10:04 of human cognition?

00:10:07 Well, that’s an interesting question.

00:10:09 It’s commonly believed, most scientists believe

00:10:13 that human intelligence can answer any question

00:10:17 in principle.

00:10:19 I think that’s a very strange belief.

00:10:21 If we’re biological organisms,

00:10:24 which are not angels,

00:10:26 then our capacities ought to have scope

00:10:31 and limits which are interrelated.

00:10:34 Can you define those two terms?

00:10:36 Well, let’s take a concrete example.

00:10:40 Your genetic endowment determines

00:10:44 that you can have a mammalian visual system,

00:10:46 arms and legs and so on,

00:10:49 and therefore become a rich, complex organism.

00:10:53 But if you look at that same genetic endowment,

00:10:56 it prevents you from developing in other directions.

00:10:59 There’s no kind of experience

00:11:01 which would lead the embryo

00:11:05 to develop an insect visual system

00:11:08 or to develop wings instead of arms.

00:11:11 So the very endowment that confers richness and complexity

00:11:16 also sets bounds on what can be attained.

00:11:23 Now, I assume that our cognitive capacities

00:11:27 are part of the organic world.

00:11:29 Therefore, they should have the same properties.

00:11:32 If they had no built in capacity

00:11:35 to develop a rich and complex structure,

00:11:39 we would understand nothing.

00:11:41 Just as if your genetic endowment

00:11:46 did not compel you to develop arms and legs,

00:11:50 you would just be some kind of random amoeboid creature

00:11:54 with no structure at all.

00:11:56 So I think it’s plausible to assume that there are limits

00:12:00 and I think we even have some evidence as to what they are.

00:12:03 So for example, there’s a classic moment

00:12:06 in the history of science at the time of Newton.

00:12:11 From Galileo to Newton, modern science

00:12:15 developed on a fundamental assumption

00:12:17 which Newton also accepted.

00:12:20 Namely, that the world, the entire universe,

00:12:24 is a mechanical object.

00:12:26 And by mechanical, they meant something like

00:12:29 the kinds of artifacts that were being developed

00:12:31 by skilled artisans all over Europe,

00:12:34 the gears, levers and so on.

00:12:37 And their belief was well,

00:12:39 the world is just a more complex variant of this.

00:12:42 Newton, to his astonishment and distress,

00:12:48 proved that there are no machines,

00:12:50 that there’s interaction without contact.

00:12:54 His contemporaries like Leibniz and Huygens

00:12:57 just dismissed this as returning to the mysticism

00:13:02 of the neo-scholastics.

00:13:03 And Newton agreed.

00:13:05 He said it is totally absurd.

00:13:08 No person of any scientific intelligence

00:13:11 could ever accept this for a moment.

00:13:13 In fact, he spent the rest of his life

00:13:15 trying to get around it somehow,

00:13:17 as did many other scientists.

00:13:20 That was the very criterion of intelligibility

00:13:24 for say Galileo or Newton.

00:13:27 A theory did not produce an intelligible world

00:13:31 unless you could duplicate it in a machine.

00:13:34 He showed you can’t, that there are no machines at all.

00:13:37 Finally, after a long struggle, took a long time,

00:13:41 scientists just accepted this as common sense.

00:13:45 But that’s a significant moment.

00:13:47 That means they abandoned the search

00:13:49 for an intelligible world.

00:13:51 And the great philosophers of the time

00:13:54 understood that very well.

00:13:57 So for example, David Hume, in his encomium to Newton,

00:14:02 wrote that Newton was the greatest thinker ever and so on.

00:14:05 He said that Newton unveiled many of the secrets of nature,

00:14:10 but by showing the imperfections

00:14:13 of the mechanical philosophy, mechanical science,

00:14:17 he showed that there are mysteries

00:14:21 which ever will remain.

00:14:23 And science just changed its goals.

00:14:26 It abandoned the mysteries.

00:14:28 If it can’t solve them, we’ll put them aside.

00:14:31 We only look for intelligible theories.

00:14:34 Newton’s theories were intelligible.

00:14:36 It’s just what they described wasn’t.

00:14:39 Well, Locke said the same thing.

00:14:42 I think they’re basically right.

00:14:44 And if so, that showed something

00:14:47 about the limits of human cognition.

00:14:49 We cannot attain the goal of understanding the world,

00:14:55 of finding an intelligible world.

00:14:58 This mechanical philosophy, from Galileo to Newton,

00:15:02 there’s a good case that can be made

00:15:05 that that’s our instinctive conception of how things work.

00:15:10 So if, say, infants are tested with things where,

00:15:16 if this moves and then this moves,

00:15:18 they kind of invent something that must be invisible

00:15:22 that’s in between them that’s making them move and so on.

00:15:24 Yeah, we like physical contact.

00:15:26 Something about our brain seeks.

00:15:28 Makes us want a world like that.

00:15:31 Just like it wants a world

00:15:32 that has regular geometric figures.

00:15:36 So for example, Descartes pointed this out

00:15:38 that if you have an infant

00:15:41 who’s never seen a triangle before and you draw a triangle,

00:15:47 the infant will see a distorted triangle,

00:15:52 not whatever crazy figure it actually is.

00:15:56 Three lines not coming quite together,

00:15:58 one of them a little bit curved and so on.

00:16:00 We just impose a conception of the world

00:16:04 in terms of geometric, perfect geometric objects.

00:16:09 It’s now been shown that this goes way beyond that.

00:16:12 That if you show on a tachistoscope,

00:16:15 let’s say a couple of lights shining,

00:16:18 you do it three or four times in a row.

00:16:20 What people actually see is a rigid object in motion,

00:16:25 not whatever’s there.

00:16:26 We all know that from a television set basically.

00:16:31 So that gives us hints of potential limits

00:16:34 to our cognition.

00:16:35 I think it does, but it’s a very contested view.

00:16:39 If you do a poll among scientists,

00:16:42 they’ll say it’s impossible; we can understand anything.

00:16:46 Let me ask and give me a chance with this.

00:16:48 So I just spent a day at a company called Neuralink

00:16:52 and what they do is try to design

00:16:56 what’s called the brain machine, brain computer interface.

00:16:59 So they try to do thousands of readings in the brain,

00:17:03 be able to read what the neurons are firing

00:17:05 and then stimulate back, so two way.

00:17:08 Their dream is to expand the capacity

00:17:12 of the brain to attain information,

00:17:16 sort of increase the bandwidth

00:17:18 at which we can search Google, that kind of thing.

00:17:22 Do you think our cognitive capacity might be expanded

00:17:26 our linguistic capacity, our ability to reason

00:17:29 might be expanded by adding a machine into the picture?

00:17:33 Can be expanded in a certain sense,

00:17:35 but a sense that was known thousands of years ago.

00:17:39 A book expands your cognitive capacity.

00:17:43 Okay, so this could expand it too.

00:17:46 But it’s not a fundamental expansion.

00:17:47 It’s not totally new things could be understood.

00:17:50 Well, nothing that goes beyond

00:17:53 their native cognitive capacities.

00:17:56 Just like you can’t turn the visual system

00:17:58 into an insect system.

00:18:00 Well, I mean, the thought is

00:18:04 perhaps you can’t directly,

00:18:06 but you can map it, sort of.

00:18:08 You could, but we already

00:18:10 know that without this experiment.

00:18:12 You could map what a bee sees and present it in a form

00:18:16 so that we could follow it.

00:18:17 In fact, every bee scientist does that.

00:18:19 But you don’t think there’s something greater than bees

00:18:25 that we can map and then all of a sudden discover something,

00:18:29 be able to understand the quantum world, quantum mechanics,

00:18:33 be able to start to make sense of it.

00:18:35 Students at MIT study and understand quantum mechanics.

00:18:41 But they always reduce it to the intuitive, the physical.

00:18:45 I mean, they don’t really understand.

00:18:46 Oh, you don’t. That may be another area

00:18:50 where there’s just a limit to understanding.

00:18:52 We understand the theories,

00:18:54 but the world that it describes doesn’t make any sense.

00:18:58 So, you know, the experiment, Schrodinger’s cat,

00:19:01 for example, can understand the theory,

00:19:03 but as Schrodinger pointed out,

00:19:05 it’s an unintelligible world.

00:19:09 One of the reasons why Einstein

00:19:11 was always very skeptical about quantum theory

00:19:14 was that he described himself as a classical realist,

00:19:19 one who wants intelligibility.

00:19:23 He has something in common with infants in that way.

00:19:27 So, back to linguistics.

00:19:30 If you could humor me, what are the most beautiful

00:19:34 or fascinating aspects of language

00:19:36 or ideas in linguistics or cognitive science

00:19:38 that you’ve seen in a lifetime of studying language

00:19:42 and studying the human mind?

00:19:44 Well, I think the deepest property of language

00:19:50 and puzzling property that’s been discovered

00:19:52 is what is sometimes called structure dependence.

00:19:57 We now understand it pretty well,

00:19:59 but it was puzzling for a long time.

00:20:01 I’ll give you a concrete example.

00:20:03 So, suppose you say the guy who fixed the car

00:20:09 carefully packed his tools, it’s ambiguous.

00:20:13 He could fix the car carefully or carefully pack his tools.

00:20:17 Suppose you put carefully in front,

00:20:21 carefully the guy who fixed the car packed his tools,

00:20:25 then it’s carefully packed, not carefully fixed.

00:20:29 And in fact, you do that even if it makes no sense.

00:20:32 So, suppose you say carefully,

00:20:34 the guy who fixed the car is tall.

00:20:39 You have to interpret it as carefully he’s tall,

00:20:41 even though that doesn’t make any sense.

00:20:44 And notice that that’s a very puzzling fact

00:20:47 because you’re relating carefully

00:20:50 not to the linearly closest verb,

00:20:53 but to the linearly more remote verb.

00:20:57 A linear closeness is an easy computation,

00:21:02 but here you’re doing a much more,

00:21:03 what looks like a more complex computation.

00:21:06 You’re doing something that’s taking you essentially

00:21:10 to the more remote thing.

00:21:13 Now, if you look at the actual structure

00:21:16 of the sentence, where the phrases are and so on,

00:21:20 turns out you’re picking out the structurally closest thing,

00:21:24 but the linearly more remote thing.

00:21:27 But notice that what’s linear is 100% of what you hear.

00:21:32 You never hear structure; you can’t.

00:21:35 So, what you’re doing,

00:21:37 and certainly this is universal, all constructions,

00:21:40 all languages, is what we’re compelled to do:

00:21:44 carry out what looks like the more complex computation

00:21:48 on material that we never hear,

00:21:52 while we ignore 100% of what we hear

00:21:55 and the simplest computation.

00:21:57 By now, there’s even a neural basis for this

00:22:00 that’s somewhat understood,

00:22:02 and there’s good theories by now

00:22:04 that explain why it’s true.

00:22:06 That’s a deep insight into the surprising nature of language

00:22:11 with many consequences.
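
A small sketch can make the structure-dependence point concrete. Assuming a hand-built and much simplified parse of “carefully, the guy who fixed the car packed his tools,” the code below picks out both the linearly closest verb and the structurally closest one, where structural distance is measured here by how many embedded-clause boundaries separate a verb from the main clause. The tree and the distance measure are illustrative, not a full syntactic analysis.

```python
# (label, children...) trees; leaves are (POS, word) pairs.
sentence = (
    "S",
    ("ADV", "carefully"),
    ("NP",
        ("DET", "the"), ("N", "guy"),
        ("RELCLAUSE",
            ("PRO", "who"), ("V", "fixed"),
            ("NP", ("DET", "the"), ("N", "car")))),
    ("VP",
        ("V", "packed"),
        ("NP", ("DET", "his"), ("N", "tools"))),
)

def verbs_with_depth(tree, depth=0):
    """Yield (verb, depth), where depth counts embedded clause boundaries."""
    _label, *children = tree
    for child in children:
        if isinstance(child[1], str):          # leaf: (POS, word)
            if child[0] == "V":
                yield child[1], depth
        else:                                  # internal node
            yield from verbs_with_depth(
                child, depth + (1 if child[0] == "RELCLAUSE" else 0))

verbs = list(verbs_with_depth(sentence))       # [('fixed', 1), ('packed', 0)]

linearly_closest = verbs[0][0]                 # first verb after "carefully"
structurally_closest = min(verbs, key=lambda v: v[1])[0]

print("linearly closest:    ", linearly_closest)       # fixed  (not the reading we get)
print("structurally closest:", structurally_closest)   # packed (the reading we get)
```

The point survives the simplification: the verb “carefully” attaches to is the one fewest structural boundaries away, even though another verb is linearly nearer.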

00:22:14 Let me ask you about a field of machine learning,

00:22:17 deep learning.

00:22:18 There’s been a lot of progress

00:22:22 in neural network based machine learning in the recent decade.

00:22:26 Of course, neural network research goes back many decades.

00:22:30 What do you think are the limits of deep learning,

00:22:35 of neural network based machine learning?

00:22:38 Well, to give a real answer to that,

00:22:41 you’d have to understand the exact processes

00:22:44 that are taking place, and those are pretty opaque.

00:22:47 So, it’s pretty hard to prove a theorem

00:22:50 about what can be done and what can’t be done,

00:22:54 but I think it’s reasonably clear.

00:22:56 I mean, putting technicalities aside,

00:22:59 what deep learning is doing

00:23:01 is taking huge numbers of examples

00:23:05 and finding some patterns.

00:23:07 Okay, that could be interesting in some areas it is,

00:23:11 but we have to ask here a certain question.

00:23:15 Is it engineering or is it science?

00:23:18 Engineering in the sense of just trying

00:23:20 to build something that’s useful,

00:23:22 or science in the sense that it’s trying

00:23:24 to understand something about elements of the world.

00:23:28 So, take, say, a Google parser.

00:23:31 We can ask that question.

00:23:33 Is it useful, yeah, it’s pretty useful.

00:23:36 I use Google Translate, so on engineering grounds,

00:23:41 it’s kind of worth having, like a bulldozer.

00:23:45 Does it tell you anything about human language?

00:23:48 Zero, nothing, and in fact, it’s very striking.

00:23:54 From the very beginning,

00:23:56 it’s just totally remote from science.

00:24:00 So, what is a Google parser doing?

00:24:02 It’s taking an enormous text,

00:24:05 let’s say the Wall Street Journal corpus,

00:24:07 and asking how close can we come

00:24:10 to getting the right description

00:24:14 of every sentence in the corpus.

00:24:16 Well, every sentence in the corpus

00:24:18 is essentially an experiment.

00:24:21 Each sentence that you produce is an experiment

00:24:24 which says, am I a grammatical sentence?

00:24:27 The answer is usually yes.

00:24:29 So, most of the stuff in the corpus

00:24:31 is grammatical sentences.

00:24:33 But now, ask yourself, is there any science

00:24:36 which takes random experiments

00:24:40 which are carried out for no reason whatsoever

00:24:43 and tries to find out something from them?

00:24:46 Like if you’re, say, a chemistry PhD student,

00:24:49 you wanna get a thesis, can you say,

00:24:51 well, I’m just gonna mix a lot of things together,

00:24:55 no purpose, and maybe I’ll find something.

00:24:59 You’d be laughed out of the department.

00:25:02 Science tries to find critical experiments,

00:25:06 ones that answer some theoretical question.

00:25:09 Doesn’t care about coverage of millions of experiments.

00:25:12 So, it just begins by being very remote from science

00:25:16 and it continues like that.

00:25:18 So, the usual question that’s asked about,

00:25:21 say, a Google parser, or some parser,

00:25:25 is how well does it do on a corpus?

00:25:28 But there’s another question that’s never asked.

00:25:31 How well does it do on something

00:25:32 that violates all the rules of language?

00:25:36 So, for example, take the structure dependence case

00:25:38 that I mentioned.

00:25:39 Suppose there was a language

00:25:41 in which you used linear proximity

00:25:45 as the mode of interpretation.

00:25:49 Then deep learning would work very easily on that.

00:25:51 In fact, much more easily than on an actual language.

00:25:54 Is that a success?

00:25:55 No, that’s a failure from a scientific point of view.

00:25:59 It’s a failure.

00:26:00 It shows that we’re not discovering

00:26:03 the nature of the system at all,

00:26:05 because it does just as well or even better

00:26:07 on things that violate the structure of the system.

00:26:10 And it goes on from there.

00:26:12 It’s not an argument against doing it.

00:26:14 It is useful to have devices like this.
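
Here is a toy illustration of that argument, not any actual parser or network: a stand-in “learner” that uses only linear proximity, the easy surface computation, evaluated against two labelings of the same made-up sentences. One labeling follows the impossible linear-proximity rule, the other the structure-dependent rule of an actual language. The vocabulary, sentence templates, and probabilities are all invented for the example.

```python
import random

random.seed(0)

NOUNS = ["guy", "woman", "mechanic", "pilot"]
REL_VERBS = ["fixed", "met", "saw"]
MAIN_VERBS = ["packed", "sold", "cleaned"]
TRAIL_VERBS = ["left", "smiled", "yawned"]

def make_sentence():
    """Fronted adverb + subject (optionally with a relative clause) +
    main verb phrase (+ optionally a trailing adjunct clause).
    Returns (tokens, verb positions, main-verb position)."""
    toks = ["carefully", "the", random.choice(NOUNS)]
    verb_positions = []
    if random.random() < 0.7:                 # relative clause on the subject
        toks += ["who", random.choice(REL_VERBS), "the", random.choice(NOUNS)]
        verb_positions.append(4)
    main_pos = len(toks)
    toks += [random.choice(MAIN_VERBS), "his", "tools"]
    verb_positions.append(main_pos)
    if random.random() < 0.5:                 # trailing adjunct clause
        toks += ["before", "he", random.choice(TRAIL_VERBS)]
        verb_positions.append(len(toks) - 1)
    return toks, verb_positions, main_pos

def surface_learner(verb_positions, adverb_pos=0):
    """The easy computation: attach the adverb to the linearly nearest verb."""
    return min(verb_positions, key=lambda p: abs(p - adverb_pos))

n = 5000
linear_hits = structural_hits = 0
for _ in range(n):
    _toks, verb_positions, main_pos = make_sentence()
    label_linear = surface_learner(verb_positions)   # the "impossible" language's rule
    label_structural = main_pos                      # the actual language's rule
    guess = surface_learner(verb_positions)
    linear_hits += guess == label_linear             # perfect, by construction
    structural_hits += guess == label_structural

print("linear-proximity language:   ", linear_hits / n)      # 1.0
print("structure-dependent language:", structural_hits / n)  # well below 1.0
```

By construction, the surface learner is perfect on the linear-proximity labeling and well below that on the structural one, which is the point being made: doing well on the impossible language is a scientific failure, not a success.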

00:26:17 So, yes, so neural networks are kind of approximators

00:26:20 that, look, there’s echoes of the behaviorist debates, right?

00:26:24 Behaviorism.

00:26:26 More than echoes.

00:26:27 Many of the people in deep learning

00:26:30 say they’ve vindicated it. Terry Sejnowski, for example,

00:26:34 in his recent books,

00:26:35 says this vindicates Skinnerian behaviorism.

00:26:39 It doesn’t have anything to do with it.

00:26:41 Yes, but I think there’s something

00:26:44 actually fundamentally different

00:26:46 when the data set is huge.

00:26:48 But your point is extremely well taken.

00:26:51 But do you think we can learn, approximate

00:26:55 that interesting complex structure of language

00:26:58 with neural networks

00:27:00 that will somehow help us understand the science?

00:27:03 It’s possible.

00:27:04 I mean, you find patterns that you hadn’t noticed,

00:27:07 let’s say, could be.

00:27:09 In fact, it’s very much like a kind of linguistics

00:27:13 that’s done, what’s called corpus linguistics.

00:27:18 Suppose you have some language

00:27:21 where all the speakers have died out,

00:27:23 but you have records.

00:27:25 So you just look at the records

00:27:28 and see what you can figure out from that.

00:27:30 It’s much better

00:27:31 to have actual speakers

00:27:33 where you can do critical experiments.

00:27:36 But if they’re all dead, you can’t do them.

00:27:38 So you have to try to see what you can find out

00:27:40 from just looking at the data that’s around.

00:27:43 You can learn things.

00:27:45 Actually, paleoanthropology is very much like that.

00:27:48 You can’t do a critical experiment on

00:27:51 what happened two million years ago.

00:27:53 So you’re kind of forced just to take what data’s around

00:27:56 and see what you can figure out from it.

00:27:59 Okay, it’s a serious study.

00:28:01 So let me venture into another whole body of work

00:28:05 and philosophical question.

00:28:08 You’ve said that evil in society arises from institutions,

00:28:13 not inherently from our nature.

00:28:15 Do you think most human beings are good,

00:28:17 they have good intent?

00:28:19 Or do most have the capacity for intentional evil

00:28:22 that depends on their upbringing,

00:28:24 depends on their environment, on context?

00:28:27 I wouldn’t say that they don’t arise from our nature.

00:28:30 Anything we do arises from our nature.

00:28:34 And the fact that we have certain institutions, not others,

00:28:38 is one mode in which human nature has expressed itself.

00:28:43 But as far as we know,

00:28:45 human nature could yield many different kinds

00:28:48 of institutions.

00:28:50 The particular ones that have developed

00:28:53 have to do with historical contingency,

00:28:56 who conquered whom, and that sort of thing.

00:29:00 They’re not rooted in our nature

00:29:03 in the sense that they’re essential to our nature.

00:29:06 So it’s commonly argued these days

00:29:10 that something like market systems

00:29:12 is just part of our nature.

00:29:15 But we know from a huge amount of evidence

00:29:18 that that’s not true.

00:29:19 There’s all kinds of other structures.

00:29:21 It’s a particular fact of a moment of modern history.

00:29:26 Others have argued that the roots of classical liberalism

00:29:30 actually argue that what’s sometimes called

00:29:34 an instinct for freedom,

00:29:36 the instinct to be free of domination

00:29:39 by illegitimate authority is the core of our nature.

00:29:43 That would be the opposite of this.

00:29:45 And we don’t know.

00:29:47 We just know that human nature can accommodate both kinds.

00:29:52 If you look back at your life,

00:29:54 is there a moment in your intellectual life

00:29:58 or life in general that jumps from memory

00:30:00 that brought you happiness

00:30:02 that you would love to relive again?

00:30:05 Sure.

00:30:06 Falling in love, having children.

00:30:10 What about, so you have put forward into the world

00:30:13 a lot of incredible ideas in linguistics,

00:30:17 in cognitive science. In terms of ideas

00:30:22 that just excited you when they first came to you,

00:30:26 are there moments you would love to relive?

00:30:28 Well, I mean, when you make a discovery

00:30:32 about something that’s exciting,

00:30:34 like, say, even the observation of structure dependence

00:30:40 and on from that, the explanation for it.

00:30:44 But the major things just seem like common sense.

00:30:49 So if you go back to take your question

00:30:53 about external and internal language,

00:30:55 you go back to, say, the 1950s,

00:30:59 language was almost entirely regarded as an external object,

00:31:03 something outside the mind.

00:31:06 It just seemed obvious that that can’t be true.

00:31:10 Like I said, there’s something about you

00:31:13 that determines you’re talking English,

00:31:15 not Swahili or something.

00:31:18 But that’s not really a discovery.

00:31:20 That’s just an observation, what’s transparent.

00:31:24 You might say it’s kind of like the 17th century,

00:31:30 the beginnings of modern science.

00:31:33 They came from being willing to be puzzled

00:31:37 about things that seemed obvious.

00:31:40 So it seems obvious that a heavy ball of lead

00:31:44 will fall faster than a light ball of lead.

00:31:47 But Galileo was not impressed by the fact

00:31:50 that it seemed obvious.

00:31:52 So he wanted to know if it’s true.

00:31:54 They carried out experiments, actually thought experiments,

00:31:59 never actually carried them out,

00:32:01 which showed that it can’t be true.

00:32:04 And out of things like that, observations of that kind,

00:32:11 why does a ball fall to the ground instead of rising,

00:32:15 let’s say, seems obvious, till you start thinking about it,

00:32:20 because why does steam rise, let’s say.

00:32:23 And I think the beginnings of modern linguistics,

00:32:27 roughly in the 50s, are kind of like that,

00:32:30 just being willing to be puzzled about phenomena

00:32:33 that looked, from some point of view, obvious.

00:32:38 And for example, a kind of doctrine,

00:32:41 almost official doctrine of structural linguistics

00:32:44 in the 50s was that languages can differ

00:32:49 from one another in arbitrary ways,

00:32:52 and each one has to be studied on its own

00:32:56 without any presuppositions.

00:32:58 In fact, there were similar views among biologists

00:33:02 about the nature of organisms, that each one,

00:33:05 they’re so different when you look at them,

00:33:07 that an organism could be almost anything.

00:33:10 Well, in both domains, it’s been learned

00:33:13 that that’s very far from true.

00:33:15 There are narrow constraints on what could be an organism

00:33:18 or what could be a language.

00:33:21 But these are, that’s just the nature of inquiry.

00:33:25 Inquiry. Science in general, yeah, inquiry.

00:33:29 So one of the peculiar things about us human beings

00:33:33 is our mortality.

00:33:35 Ernest Becker explored it in general.

00:33:38 Do you ponder the value of mortality?

00:33:40 Do you think about your own mortality?

00:33:43 I used to when I was about 12 years old.

00:33:48 I wondered, I didn’t care much about my own mortality,

00:33:51 but I was worried about the fact that

00:33:54 if my consciousness disappeared,

00:33:57 would the entire universe disappear?

00:34:00 That was frightening.

00:34:01 Did you ever find an answer to that question?

00:34:03 No, nobody’s ever found an answer,

00:34:05 but I stopped being bothered by it.

00:34:07 It’s kind of like Woody Allen in one of his films,

00:34:10 you may recall, he starts, he goes to a shrink

00:34:14 when he’s a child and the shrink asks him,

00:34:16 what’s your problem?

00:34:17 He says, I just learned that the universe is expanding.

00:34:21 I can’t handle that.

00:34:22 And then another absurd question is,

00:34:27 what do you think is the meaning of our existence here,

00:34:32 our life on Earth, our brief little moment in time?

00:34:35 That’s something we answer by our own activities.

00:34:40 There’s no general answer.

00:34:42 We determine what the meaning of it is.

00:34:46 The actions determine the meaning.

00:34:48 Meaning in the sense of significance,

00:34:50 not meaning in the sense that chair means this.

00:34:55 But the significance of your life is something you create.

00:35:01 Noam, thank you so much for talking to me today.

00:35:02 It was a huge honor.

00:35:04 Thank you so much.

00:35:05 Thanks for listening to this conversation with Noam Chomsky

00:35:08 and thank you to our presenting sponsor, Cash App.

00:35:11 Download it, use code LexPodcast, you’ll get $10

00:35:16 and $10 will go to FIRST, a STEM education nonprofit

00:35:19 that inspires hundreds of thousands of young minds

00:35:22 to learn and to dream of engineering our future.

00:35:25 If you enjoy this podcast, subscribe on YouTube,

00:35:28 give it five stars on Apple Podcast, support on Patreon,

00:35:32 or connect with me on Twitter.

00:35:34 Thank you for listening and hope to see you next time.