Researchers have confirmed that every society on the planet makes music and uses it in “strikingly similar ways,” from lullabies to love songs.
“Our article finds universal patterns in vocal music — both the social contexts in which it occurs and the auditory structure of the song,” said Dean Knox, a computational social scientist and assistant professor of politics at Princeton. “These patterns are similar across hundreds of small-scale societies. Among other results, we show that machine-learning techniques can reliably recognize the social function of a song — i.e. dance songs, healing songs, love songs, lullabies — even without knowing anything about the culture or region that created the song, based only on patterns learned from other societies.” Their paper appears today in the journal Science.
The research team included psychologists, anthropologists, biologists, musicians, linguists and other experts from 11 institutions on three continents, including Harvard University, Victoria University of Wellington in New Zealand, the University of Rochester’s Eastman School of Music, the Max Planck Institute for Empirical Aesthetics in Germany, and McGill University in Canada. It also included two political scientists to manage the extraordinary data set: Knox and his graduate school roommate, Christopher Lucas, now an assistant professor of political science at Washington University in St. Louis.
“I guess you wouldn’t imagine political scientists on the team to analyze music – but here that’s one of our bread-and-butter things,” Lucas said. “Everything we do can be analyzed as data. Twenty years ago in political science, you studied polls and votes. Today, we study the way politicians talk, the audio recordings, the way people are affected. … The tools we’ve developed to study that in politics have a lot of punch when applied elsewhere, too.”
The 19-person team set out to answer big questions: Is music a cultural universal? If it is, which musical qualities overlap across disparate societies? If it isn’t, why does it seem so ubiquitous?
To answer these questions, they needed a dataset of unprecedented breadth and depth. Over a five-year period, the team hunted down hundreds of recordings in libraries and private collections of scientists half a world away.
“We are so used to being able to find any piece of music that we like on the internet,” said Samuel Mehr, a principal investigator at Harvard’s Music Lab and the first author on the paper. “But there are thousands and thousands of recordings buried in archives that are not accessible online. We didn’t know what we would find: at one point we found an odd-looking call number, asked a Harvard librarian for help, and 20 minutes later she wheeled out a cart of about 20 cases of reel-to-reel recordings of traditional Celtic music.”
The research team ultimately examined 315 societies across the planet, all but six of which were found in ethnographic documents catalogued by the Human Relations Area Files organization. They collected around 5,000 descriptions of song from a subset of 60 cultures spanning 30 distinct geographic regions. For the discography, they collected 118 songs from 86 cultures, again covering 30 geographic regions.
Their deep dive into song helped create the Natural History of Song (NHS) Ethnography, for which they coded dozens of variables. The researchers logged details about singers and audience members, time of day, duration of singing, presence of instruments, and more for thousands of passages about songs in the ethnographic corpus. The discography was analyzed four different ways: machine summaries, listener ratings, expert annotations and expert transcriptions.
“It’s a significant project,” Lucas said. “Exceptionally difficult. The data collection was very big. For the analysis, about 10,000 lines of code were written. Dean and I were in there, doing the actual statistical analysis of the paper.” Knox and Lucas compiled and crunched the data, including nearly 500,000 words gleaned from song descriptions across those 315 societies, coding them so that each society had a median of nearly 50 songs to examine.
Their big questions led to one big answer: Music pervades social life in similar ways all around the world.
“Scholars have made sweeping claims about the universality of music and music-related behavior, but these claims are incredibly hard to test,” Knox said. “We were working with unstructured audio recordings and textual ethnographic descriptions of song, not the simple numeric data that most analysts are used to. The question is, how can we empirically evaluate these ideas with messy and complex data? For example, every ethnographer has their own biases about what to describe, so we had to think carefully about the right way to use their accounts to draw statistically principled conclusions about patterns in music.”
They found that, across societies, music is associated with behaviors such as infant care (lullabies), healing, dance, love, mourning, warfare, processions and ritual — and that these behaviors vary surprisingly little from society to society. They discovered that music for those universal behaviors tended to share similar musical features, which Knox and Lucas were able to train a computer to recognize.
“One of our most surprising results is that machines, which have no knowledge of human psychology or music theory, can be trained to recognize lullabies, healing songs, love songs and dance songs — even in cultures that they’ve never seen before,” Knox said.
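The cross-cultural test Knox describes can be illustrated with a leave-one-culture-out evaluation: train a classifier on songs from every society except one, then ask it to label the held-out society's songs. The sketch below is a hypothetical illustration with simulated data — the feature names, class means, and model choice are assumptions for demonstration, not the authors' actual pipeline.

```python
# Hypothetical sketch of leave-one-culture-out song classification.
# Simulated "musical features" and logistic regression stand in for the
# study's real features and models; only the evaluation scheme is the point.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

FUNCTIONS = ["dance", "healing", "love", "lullaby"]
# Toy two-dimensional features (imagine tempo and pitch range) whose
# means differ by song function — an assumption for illustration.
CLASS_MEANS = {"dance": [2.0, 0.0], "healing": [0.0, 2.0],
               "love": [-2.0, 0.0], "lullaby": [0.0, -2.0]}

def make_culture(n_songs=40):
    """Simulate one society's songs, drawn around function-specific means."""
    X, y = [], []
    for _ in range(n_songs):
        f = str(rng.choice(FUNCTIONS))
        X.append(rng.normal(CLASS_MEANS[f], 1.0))
        y.append(f)
    return np.array(X), np.array(y)

cultures = [make_culture() for _ in range(6)]

# Leave-one-culture-out: train on five societies, test on the sixth,
# so the classifier never sees the held-out culture during training.
accs = []
for held_out in range(len(cultures)):
    X_tr = np.vstack([c[0] for i, c in enumerate(cultures) if i != held_out])
    y_tr = np.concatenate([c[1] for i, c in enumerate(cultures) if i != held_out])
    X_te, y_te = cultures[held_out]
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    accs.append(clf.score(X_te, y_te))

mean_acc = float(np.mean(accs))
print(f"mean held-out accuracy: {mean_acc:.2f} (chance = 0.25)")
```

Because the simulated features genuinely differ by function, held-out accuracy lands well above the 25% chance level for four classes — mirroring, in miniature, the paper's finding that song function is recognizable even in societies the model has never seen.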
For Mehr, the study is a first step toward unlocking the governing rules of “musical grammar.” That idea has been percolating among music theorists, linguists and psychologists of music for decades, but had never been demonstrated across cultures.
“In music theory, tonality is often assumed to be an invention of Western music, but our data raise the controversial possibility that this could be a universal feature of music,” Mehr said. “That raises pressing questions about structure that underlies music everywhere — and whether and how our minds are designed to make music.”
For Knox, the most exciting part is that machines are learning to decode emotion and tone. “Chris Lucas and I have been successful in training machines to identify even complex human concepts like skepticism, and to learn how and why humans use them the way they do,” he said. “We’re getting closer and closer to being able to characterize and analyze human communication in its full richness.”