Elizabeth Svoboda, for Nautilus:
[Computational linguist Simon] Kirby took a unique approach to probing the origins of language: He taught human participants novel languages he had made up. He and his colleagues showed the participants cards with different shapes and pictures on them, taught them the words for these pictures, and tested them. “Whatever they do, whether they get it right or wrong, we teach it to the next person,” Kirby says. “It’s rather like the game Telephone.”
Remarkably, as the language passed from one learner to the next, it began to acquire cogent structure. After 10 generations, the language had changed to make it easier for human speakers to process. Most notably, it began to show “compositionality,” meaning that parts of words corresponded to parts of their meaning—shapes with four sides, for instance, might all share a prefix like “ikeke.” Thanks to these predictable properties, learners developed a mental framework they could easily fit new words into. “Participants not only learn everything we show them,” Kirby says, “but they can correctly guess words we didn’t even train them on.”
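The dynamic described above—a transmission chain with a learning bottleneck, out of which compositional structure emerges—can be sketched as a toy simulation. This is not Kirby’s actual model; the meaning space, the syllable inventory, and the split-word learner below are all my own simplifying assumptions, chosen only to illustrate how memorization plus generalization over generations can ratchet a random vocabulary toward prefix-plus-suffix structure.

```python
import random
from collections import Counter

# Hypothetical toy meaning space: each meaning is a (color, shape) pair.
COLORS = ["red", "blue", "green"]
SHAPES = ["square", "circle", "spiral"]
MEANINGS = [(c, s) for c in COLORS for s in SHAPES]
SYLLABLES = ["ka", "po", "ni", "lu", "te", "gi"]

def random_word(rng):
    # A holistic word: arbitrary syllables with no internal structure.
    return "".join(rng.choice(SYLLABLES) for _ in range(4))

def learn(observed):
    """Infer a prefix per color and a suffix per shape by majority vote,
    naively splitting each observed word in half."""
    prefs, sufs = {}, {}
    for (color, shape), word in observed.items():
        half = len(word) // 2
        prefs.setdefault(color, Counter())[word[:half]] += 1
        sufs.setdefault(shape, Counter())[word[half:]] += 1
    prefix = {c: cnt.most_common(1)[0][0] for c, cnt in prefs.items()}
    suffix = {s: cnt.most_common(1)[0][0] for s, cnt in sufs.items()}
    return prefix, suffix

def produce(meaning, observed, prefix, suffix, rng):
    """Memorized meanings are reproduced verbatim; unseen meanings are
    guessed compositionally from the learned markers (or invented)."""
    if meaning in observed:
        return observed[meaning]
    color, shape = meaning
    pre = prefix.get(color) or rng.choice(SYLLABLES) * 2
    suf = suffix.get(shape) or rng.choice(SYLLABLES) * 2
    return pre + suf

def iterate(generations=10, bottleneck=6, seed=1):
    """Run a Telephone-style chain: each learner sees only a random
    subset of the previous generation's language, then must label
    every meaning for the next learner."""
    rng = random.Random(seed)
    language = {m: random_word(rng) for m in MEANINGS}
    for _ in range(generations):
        observed = dict(rng.sample(sorted(language.items()), bottleneck))
        prefix, suffix = learn(observed)
        language = {m: produce(m, observed, prefix, suffix, rng)
                    for m in MEANINGS}
    return language
```

The bottleneck is what drives the effect: meanings a learner never saw must be reconstructed from whatever regularities survived transmission, and those reconstructions are, by construction, compositional—so structure accumulates generation by generation, mirroring Kirby’s observation that participants “can correctly guess words we didn’t even train them on.”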
Kirby realized that this process of iterated learning—which depended on brain function but extended beyond it—went a long way toward explaining where language structure came from. Having watched in the lab as ordered languages appeared, he’s skeptical when he sees colleagues get entrenched in purely biological explanations for language’s origins. “There’s been this assumption that brain and behavior are related very simply, but languages emerge out of huge populations of socially embedded agents. The problem with ‘gene for x’ or ‘grammar module y’ is they ignore how something that is the property of an individual is linked to something that is the property of a community.”
I like the idea that multi-generational social networks are an extension of our own neural networks. Also that we make language our own, based not just on basic morphology, but on social mores.