
Discovery challenges assumptions about the structure of language

Every time we speak, we're improvising.

"Humans possess a remarkable ability to talk about almost anything, sometimes putting words together into never-before-spoken or -written sentences," said Morten Christiansen, the William R. Kenan, Jr. Professor of Psychology in the College of Arts and Sciences.

We can improvise new sentences so readily, language scientists believe, because we have acquired mental representations of the patterns of language that allow us to combine words into sentences. The nature of those patterns and how they work, however, remains a puzzle in cognitive science, Christiansen said.

In new research, Christiansen and co-author Yngwie A. Nielsen of Aarhus University offer a new perspective on those mental representations, challenging the long-standing assumption in the language sciences that they consist of highly complex syntactic structures. The study focused on English, but the researchers are optimistic that its findings hold across languages, and could reshape how we understand language evolution, language development, and second-language education. 

For decades, scientists have believed we rely on a complex mental grammar to build sentences with hierarchically organized structure, like a branching tree. But Christiansen and Nielsen suggest that our mental representations might be more like snapping pre-assembled LEGO pieces (such as a door frame or a wheel set) together into a complete model. Instead of intricate hierarchies, they propose, we use small, linear chunks of word classes like nouns and verbs, including short sequences that can't be formed by the rules of grammar, such as "in the middle of the" or "wondered if you."

Their study, "Evidence for the Representation of Non-Hierarchical Structures in Language," was published Jan. 21. The journal also featured the study.

The prevailing theory since at least the 1950s is based on hierarchical, tree-like mental representations, which set humans apart from other animals, Christiansen said. In this view, words and phrases combine according to the principles of grammar into larger units called constituents. For example, in the sentence "She ate the cake," "the" and "cake" combine into the noun phrase "the cake," which then combines with "ate" into the verb phrase "ate the cake," and finally with "she" to make the sentence.

"But not all sequences of words form constituents," Christiansen and Nielsen wrote in a summary of their paper. "In fact, the most common three- or four-word sequences in language are often nonconstituents, such as 'can I have a' or 'it was in the.'"
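To make the contrast concrete, here is a minimal illustrative sketch, not taken from the study, of how the two kinds of representation could be written down as simple Python data structures: a nested constituent tree for "She ate the cake" versus flat, linear chunks of word classes, including nonconstituent sequences like "can I have a."

# Illustrative toy example only; the word-class labels are ordinary
# part-of-speech tags, not materials from the study.

# 1) Hierarchical (constituent) view: nested phrases, like a branching tree.
#    "She ate the cake" = [S [NP she] [VP ate [NP the cake]]]
hierarchical = (
    "S",
    ("NP", "she"),
    ("VP", "ate", ("NP", "the", "cake")),
)

# 2) Flat (chunk) view: short linear sequences of word classes,
#    which need not correspond to any constituent.
chunks = [
    ["PRON", "VERB", "DET", "NOUN"],  # she ate the cake
    ["AUX", "PRON", "VERB", "DET"],   # "can I have a"  (a nonconstituent)
    ["PRON", "VERB", "PREP", "DET"],  # "it was in the" (a nonconstituent)
]

def depth(node):
    # Nesting depth of a tuple tree; bare words count as depth 0.
    if not isinstance(node, tuple):
        return 0
    return 1 + max(depth(child) for child in node[1:])

print(depth(hierarchical))           # 3 -> a genuinely hierarchical tree
print(max(len(c) for c in chunks))   # 4 -> just short, flat sequences

On the flat view, knowing a language is less about deriving such trees and more about having a large stock of these short sequences ready to snap together.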

Because they don't conform to the rules of grammar, nonconstituent sequences have been overlooked. But they do play a role in a speaker's knowledge of their language, the researchers found.

In experiments, an eye-tracking study, and an analysis of phone conversations, the researchers discovered that linear sequences of word classes can be "primed": once we hear or read such a sequence, we process it faster the next time we encounter it. That is compelling evidence that these sequences are part of our mental representation of language, a part that goes beyond the rules of grammar, Christiansen said.

"I think the main contribution is showing that traditional rules of grammar cannot capture all of the mental representations of language structure," Nielsen said.

"It might even be possible to account for how we use language in general with flatter structure," Christiansen said. "Importantly, if you don't need the more complex machinery of hierarchical syntax, then this could mean that the gulf between human language and other animal communication systems is much smaller than previously thought."


Image: Two people sitting on a couch, conversing (Samsung UK/Unsplash). Caption: Humans possess a remarkable ability to talk about almost anything, readily improvising new sentences, but how?