How language gives your brain a break | MIT News
Study: In dozens of languages, words that work together stay together.
Here’s a quick task: Take a look at the sentences below and decide which is the most effective.
(1) “John threw out the old trash sitting in the kitchen.”
(2) “John threw the old trash sitting in the kitchen out.”
Either sentence is grammatically acceptable, but you probably found the first one to be more natural. Why? Perhaps because of the placement of the word “out,” which seems to fit better in the middle of this word sequence than the end.
In technical terms, the first sentence has a shorter “dependency length” — a shorter total distance, in words, between the crucial elements of a sentence. Now a new study of 37 languages by three MIT researchers has shown that most languages move toward “dependency length minimization” (DLM) in practice. That means language users have a global preference for more locally grouped dependent words, whenever possible.
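As a rough illustration (this is not the paper's actual metric or data, just a toy sketch), the "total distance" idea can be computed by summing the word-position gaps between each head word and its dependent:

```python
def dependency_length(dependencies):
    """Total dependency length: the sum of word-position
    distances between each (head, dependent) pair."""
    return sum(abs(head - dep) for head, dep in dependencies)

# Hypothetical word indices for the "threw" -> "out" particle
# dependency, the only one whose distance differs between the
# two example sentences:
# (1) "John threw out the old trash ..."  -> "threw" at 1, "out" at 2
# (2) "John threw ... the kitchen out."  -> "threw" at 1, "out" at 9
print(dependency_length([(1, 2)]))  # 1
print(dependency_length([(1, 9)]))  # 8
```

On this toy count, sentence (1) keeps the related words eight positions closer together, which is the preference DLM describes.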
“People want words that are related to each other in a sentence to be close together,” says Richard Futrell, a PhD student in the Department of Brain and Cognitive Sciences at MIT, and a lead author of a new paper detailing the results. “There is this idea that the distance between grammatically related words in a sentence should be short, as a principle.”
The paper, published this week in the Proceedings of the National Academy of Sciences, suggests people modify language in this way because it makes things simpler for our minds — as speakers, listeners, and readers.
“When I’m talking to you, and you’re trying to understand what I’m saying, you have to parse it, and figure out which words are related to each other,” Futrell observes. “If there is a large amount of time between one word and another related word, that means you have to hold one of those words in memory, and that can be hard to do.”
While the existence of DLM had previously been posited and identified in a couple of languages, this is the largest study of its kind to date.
“It was pretty interesting, because people had really only looked at it in one or two languages,” says Edward Gibson, a professor of cognitive science and co-author of the paper. “We thought it was probably true [more widely], but that’s pretty important to show. … We’re not showing perfect optimization, but [DLM] is a factor that’s involved.”
From head to tail
To conduct the study, the researchers used four large databases of sentences that have been parsed grammatically: one from Charles University in Prague, one from Google, one from the Universal Dependencies Consortium (a new group of computational linguists), and a Chinese-language database from the Linguistic Data Consortium at the University of Pennsylvania. The sentences are taken from published texts, and thus represent everyday language use.
To quantify the effect of placing related words closer to each other, the researchers compared the dependency lengths of the sentences to a couple of baselines for dependency length in each language. One baseline randomizes the distance between each “head” word in a sentence (such as “threw,” above) and the “dependent” words (such as “out”). However, since some languages, including English, have relatively strict word-order rules, the researchers also used a second baseline that accounted for the effects of those word-order relationships.
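A minimal sketch of the first, free word-order baseline (a simplification: the study's real baselines also respect each language's grammatical constraints, and the function names here are made up for illustration):

```python
import random

def dependency_length(dependencies):
    """Sum of word-position distances between heads and dependents."""
    return sum(abs(h - d) for h, d in dependencies)

def random_baseline(dependencies, n_words, trials=2000, seed=0):
    """Average dependency length when the same head-dependent
    pairs are scored over random reorderings of the words."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        perm = list(range(n_words))  # a random linearization
        rng.shuffle(perm)
        total += sum(abs(perm[h] - perm[d]) for h, d in dependencies)
    return total / trials

# The observed ordering keeps "threw" and "out" adjacent (length 1);
# random orderings of a 10-word sentence average a longer distance.
observed = dependency_length([(1, 2)])
baseline = random_baseline([(1, 2)], n_words=10)
print(observed < baseline)  # True
```

If observed lengths sit reliably below such baselines across a corpus, that is evidence the language's word orders are shorter than chance would predict, which is what the DLM finding amounts to.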
In both cases, Futrell, Gibson, and co-author Kyle Mahowald found, the DLM tendency exists, to varying degrees, among languages. Italian appears to be highly optimized for short dependencies; German, which has some notoriously indirect sentence constructions, is far less optimized, according to the analysis.
And the researchers also discovered that “head-final” languages such as Japanese, Korean, and Turkish, where the head word comes last, show less length minimization than is typical. This could be because these languages have extensive case-markings, which denote the function of a word (whether a noun is the subject, the direct object, and so on). The case markings would thus compensate for the potential confusion of the larger dependency lengths.
“It’s possible, in languages where it’s really obvious from the case marking where the word fits into the sentence, that might mean it’s less important to keep the dependencies local,” Futrell says.
Other scholars who have done research on this topic say the study provides valuable new information.
“It’s interesting and exciting work,” says David Temperley, a professor at the University of Rochester, who along with his Rochester colleague Daniel Gildea has co-authored a study comparing dependency length in English and German. “We wondered how general this phenomenon would turn out to be.”
Futrell, Gibson, and Mahowald readily note that the study leaves larger questions open: Does the DLM tendency occur primarily to help the production of language, its reception, a more strictly cognitive function, or all of the above?
“It could be for the speaker, the listener, or both,” Gibson says. “It’s very difficult to separate those.”
What about German? (Or is that why Mark Twain claimed the language was insane?)
"the distance between grammatically related words..."
'threw' and 'out' are not grammatically related (although they are grammatically connected, in this instance). They are pragmatically related.
You gotta love linguistics. The idea that the structure of a sentence makes it more or less effective to process is an old one and belongs, I think, to the class of 'functional explanations'. A lot of functional explanations thrown together make for 'functional grammar'. One decade it's the idea of the century, the next decade it's considered total junk science. To be a good linguist, you mainly have to be OLD and have seen many theories.
Perhaps that is why Latin so popular in schools no longer is.
Really interesting finding. It also got me wondering: is this related to the information density of sentences? Do people prefer language structures that can express the most information in the smallest size? Or maybe this is caused by cognitive load: sentences that cause minimal cognitive load are perceived to be more effective.
That means you have to listen to a German speaker until they finish; I think that's a positive.
Ending-a-sentence-in-a-preposition Land is already a place we're taught not to visit. While your findings about linguistics are very interesting, the example isn't very strong.
So, was John or the trash sitting in the kitchen when it got thrown out? Plainly it wasn't the trash (because it is no longer sitting there once thrown out) but John. So the second example is clearer to me.
Given modern attention spans, long sentences, much less ones that end with a preposition, are treading dangerously. Oh, for the days of paragraph-long sentences by Henry James!
I had not heard of dependency length minimization (DLM). But I have often struggled with the problem in my own writing or should I say "I have often struggled in my own writing with the problem?"
Now that I have heard the assigned name of the concept, I suggest it be called dependency length optimization (DLO), as minimization does not cover all aspects of the problem. In the example given above I prefer "John threw out the old trash that was sitting in the kitchen."
That's linguistic xenophobia.
At least, it is very simplistic.
In a given language, a sentence with a preposition at its end will deliver a piece of information, for example the object, before this preposition; in another language (or the same one used in an alternative way), the sentence will deliver first the total description of the action done on the object.
No reason to ramble on, or to pollute the world with another set of initials: DLM? Frankly :)
Do not work on SETI, on Cetaceans, or things of this kind: with such an essay on the difference between the human languages, you are already flooded.
There's an intriguing parallel here between this MIT study and a similar study in cognitive psychology (known as "The Nun Study") that might be worth looking into; the study found that "Low 'idea density' and low grammatical complexity in autobiographies written in early life were associated with low cognitive test scores in late life." Additionally, "among the 14 [subjects] who died, neuropathologically confirmed Alzheimer's disease was present in all of those with low idea density in early life and in none of those with high idea density."
[ Journal Ref: http://jama.jamanetwork.com/ar... ][ See also: http://itre.cis.upenn.edu/~myl... or https://en.wikipedia.org/wiki/... ]
In the case of multilingual people (Indo-Aryan, Indo-European, and Afro-Asiatic speakers), word, syntax, and vocabulary derivation are held by automatic memory and its translation. For example, if a multilingual person thinks of a word in his mother tongue (it should be held in the mother tongue) and then wants to find the equivalent word in another language, he depends on his auditory automatic memory: his brain scans to find the word that is already recorded in his brain by hearing. It's not only the case for humans; it is also seen in birds, dolphins, and dogs.
Sorry to be dismissive, but I think this is all a bit simplistic. German, as has been pointed out, is not "optimized" for DLM... yet has had centuries to evolve to be so.
Much more interesting to me is how languages handle how, when and where.
(1) "I am going to the beach by car tomorrow."
(2) "I am going tomorrow by car to the beach."
Same "DLM," but the first one sounds better than the second to native English speakers. In Dutch, though, the latter would be the preferred word order. Hmmmm....
Reprinted with permission of MIT News