A 'language universal' is the name given to a property shared by all of the world's languages, however much they vary.
In a sense, it's a universal feature we all rely on in our speech, no matter which language we speak. Given the sheer number of languages in the world, currently estimated at roughly 6,500, such universals are few and far between.
That's why it came as a surprise when scientists from the Massachusetts Institute of Technology (MIT) published a paper in the latest edition of Proceedings of the National Academy of Sciences (PNAS) claiming to have found one.
The researchers state that all languages self-organise in a particular way that sees related concepts stay as close as possible to each other within a sentence. An example of this would be the fact that an adjective needs to be kept in close proximity to the noun it modifies. In 'old lady', for instance, keeping the adjective 'old' close to its noun 'lady' makes it easier for listeners to grasp that the lady is old.
Another example would be the sentence "John threw out the old trash sitting in the kitchen", which sounds more natural to the human ear than "John threw the old trash sitting in the kitchen out", even though both are grammatical. This tendency is known as dependency length minimisation (DLM).
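The idea can be made concrete by summing, for each grammatical link in a sentence, the distance between the two words it connects. A minimal sketch follows; the dependency arcs are hand-drawn simplifications for illustration, not parses taken from the study, and the helper name `dependency_length` is likewise an assumption.

```python
# Toy illustration of dependency length: the sum of word-position
# distances between each head word and its dependent.

def dependency_length(arcs):
    """Total distance across all (head, dependent) index pairs."""
    return sum(abs(head - dep) for head, dep in arcs)

# "John threw out the old trash sitting in the kitchen"
#   0    1     2   3   4    5      6     7   8    9
near_arcs = [
    (1, 0),  # threw -> John
    (1, 2),  # threw -> out
    (1, 5),  # threw -> trash
    (5, 3),  # trash -> the
    (5, 4),  # trash -> old
    (5, 6),  # trash -> sitting
    (6, 7),  # sitting -> in
    (7, 9),  # in -> kitchen
    (9, 8),  # kitchen -> the
]

# "John threw the old trash sitting in the kitchen out"
#   0    1     2   3    4      5     6   7    8     9
far_arcs = [
    (1, 0),  # threw -> John
    (1, 9),  # threw -> out (now far from its verb)
    (1, 4),  # threw -> trash
    (4, 2),  # trash -> the
    (4, 3),  # trash -> old
    (4, 5),  # trash -> sitting
    (5, 6),  # sitting -> in
    (6, 8),  # in -> kitchen
    (8, 7),  # kitchen -> the
]

print(dependency_length(near_arcs))  # 14
print(dependency_length(far_arcs))   # 20
```

The version our ears prefer is exactly the one with the shorter total dependency length.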
The researchers set out to score 37 languages on how close grammatically related words sit to each other in real sentences. They compared each language's score to a baseline derived from randomly ordered sentences: if every language came in consistently below the baseline, DLM would be playing a major role in all of them.
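One way to picture that baseline comparison is to shuffle a sentence's words while keeping its grammatical links fixed, then see how much longer the links get on average. The sketch below is a rough stand-in for that idea, not the study's exact procedure; the arcs reuse the same hand-drawn simplification, and `random_baseline` is a hypothetical helper.

```python
import random

def dependency_length(arcs):
    """Total distance across all (head, dependent) index pairs."""
    return sum(abs(head - dep) for head, dep in arcs)

def random_baseline(arcs, n_words, trials=2000, seed=0):
    """Average dependency length when the same words, with the same
    grammatical links, are laid out in random order. A crude stand-in
    for the study's random-ordering baseline."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        order = list(range(n_words))
        rng.shuffle(order)
        # Map each word to its slot in the shuffled sentence.
        pos = {word: slot for slot, word in enumerate(order)}
        total += sum(abs(pos[head] - pos[dep]) for head, dep in arcs)
    return total / trials

# Simplified arcs for "John threw out the old trash sitting in the kitchen"
arcs = [(1, 0), (1, 2), (1, 5), (5, 3), (5, 4),
        (5, 6), (6, 7), (7, 9), (9, 8)]

observed = dependency_length(arcs)
baseline = random_baseline(arcs, n_words=10)
print(observed, round(baseline, 1))  # the observed length sits well below the baseline
```

A language where real sentences consistently score below this kind of shuffled baseline is one where speakers keep related words together, which is the pattern the paper reports across all 37 languages.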
That's exactly what they found, which could go a long way towards proving that some element of language is genetically predetermined in humans, and that we have specific brain architecture dedicated to language. That would mean language isn't just a by-product of the human brain's general capabilities; it's a cognitive package we're born with as a species.