Despite spending countless hours of her PhD at Stanford making visits to remote stretches of Alaska, poring over yellow cedar measurements and photos, and ultimately publishing her findings, Lauren Oakes was about to experience her data in a new way.
Driving for a weekend trip to the Sierras, she turned the volume way up in her car and hit play. A cascading piano was joined by a flute, cello, and other instruments. As the piece continued, the piano’s high staccato notes gave way to lower, more intermittent ones before ending on a wave of strings, leaving a sense of another movement yet to be written.
Oakes had just heard the sound of climate change in Alaska’s yellow cedar forests and the ways it has already altered the landscape. It wasn’t just a composer’s impression of her research, though. She had just heard her data — data meticulously collected and pored over for years — translated from numbers and charts into music.
“To hear the patterns it took me years to understand was incredible,” she said.
The piece could change how researchers and the public engage with data. Music based on data can reveal new patterns to scientists and move findings out of the arcane language of empirical orthogonal functions, p-values, and Kruskal-Wallis tests into a language everyone can understand.
The research Oakes had just heard came courtesy of Nik Sawe, a fellow Stanford PhD student at the time the music was created and a current researcher there. He had emailed a group of fellow students at the university hoping to find some data to turn into music after going to a talk about using a technique called data sonification to make music from seizure data.
“When you look at the readout that a doctor could analyze, it looked like noise,” he said. “But when you hear the stuff with one speaker playing the healthy side of a brain and one playing the afflicted side, you can hear the difference with this structured noise.”
If it worked for medical data, Sawe thought it could work for environmental data as well. He had written a computer program that essentially reads data as sheet music, much like a player piano.
And Oakes’ work made for a compelling piece. There were multiple types of trees in the forest and a clear progression of climate change killing off yellow cedars. Rising temperatures are decimating the snowpack, and when still-frequent cold snaps hit, there isn’t enough insulation left to protect the cedars’ shallow roots, so the trees die.
It’s an odd scenario — death by freezing in a warming world — but one that could have profound impacts on one of the most culturally and economically important trees in Alaska as it dies out and other, less valuable trees take its place.
“Culturally, they’ve been used for about 9,000 years in carvings,” Oakes, now a lecturer at Stanford, said. “From an economic standpoint, they are the most valuable conifer in Alaska. Even though right now they comprise a lower percentage of the forest in terms of density, when there is a sale for timber in Alaska, they tend to drive it.”
That’s why Sawe picked up Oakes’ data and turned it into tunes. Though a computer played the music, Sawe helped arrange the piece so it made sense. He assigned different trees to different instruments based on their role in the forest (in the case of Sitka spruce, the cello, because spruce is a common wood in cello construction) and chose a key so all the players were on the same page (in this case, a rather foreboding D minor).
Each note in the piece is a single tree from one of Oakes’ study sites; its pitch conveys the tree’s age, and its loudness the tree’s size. All the parts are played by a computer using the Musical Instrument Digital Interface, known more commonly by its less wonky acronym, MIDI.
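The mapping described above — one note per tree, pitch from age, loudness from size, an instrument per species, everything constrained to D minor — can be sketched in a few lines. The field names, scaling choices, and instrument numbers below are illustrative assumptions, not Sawe’s actual program:

```python
# A minimal sketch of the tree-to-note mapping: each tree becomes one note,
# with pitch derived from age and loudness (MIDI velocity) from trunk size.
# All values and field names here are illustrative assumptions.

# D natural minor pitch classes (MIDI note numbers), so every note stays in key.
D_MINOR = [62, 64, 65, 67, 69, 70, 72]  # D4, E4, F4, G4, A4, Bb4, D5

# Hypothetical instrument assignment per species (General MIDI program numbers).
INSTRUMENTS = {"yellow_cedar": 0,   # piano
               "sitka_spruce": 42,  # cello
               "hemlock": 73}       # flute

def tree_to_note(species, age_years, diameter_cm,
                 max_age=700, max_diameter=150):
    """Map one tree record to a (program, pitch, velocity) note event."""
    # Older trees get lower scale degrees, so ancient cedars sit in a low register.
    degree = int((1 - min(age_years / max_age, 1)) * (len(D_MINOR) - 1))
    pitch = D_MINOR[degree]
    # Bigger trees play louder: scale diameter into the MIDI velocity range 30-127.
    velocity = 30 + int(min(diameter_cm / max_diameter, 1) * 97)
    return INSTRUMENTS[species], pitch, velocity

# One note per tree; a real rendering would also assign onset times and
# durations, then write the events out as a MIDI file.
plot = [("yellow_cedar", 650, 90),
        ("sitka_spruce", 120, 60),
        ("hemlock", 40, 25)]
notes = [tree_to_note(*tree) for tree in plot]
```

In this sketch, the 650-year-old cedar lands on the lowest, loudest piano note while the young hemlock plays a high, quiet flute note — a crude version of how a plot of trees becomes a passage of music.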
Together, the piece conveys a forest in flux. Sawe also isolated the piano as a solo piece to highlight what’s happening to yellow cedars in particular. In that context, the lively tinkle of notes reminiscent of Philip Glass slips into a dirge by the end as gaps of silence and single notes dominate the piece.
Sawe isn’t a composer by trade — he studies how we make decisions about the environment using a mix of neurology and economics — but he is someone who wants to take complex data and make it understandable.
“With data sonification, you can handle a lot more dimensions if you’re listening to data than looking at it,” he said. “It’s useful for scientists on the one hand but on the other hand, the fact that you can take something like the data from 2,000 trees in Alaska and give someone a 20-second description of what that song is portraying and they pick it up [means] it has huge potential to share these narratives with people.”
For Oakes, that’s exactly what she was hoping for when she responded to Sawe’s initial email. She wanted her data to be so compelling that people would have to stop and pay attention to it.
The early feedback indicates the project has already realized some of that potential. The California Academy of Sciences has reached out to them about a public event, and Stanford has expressed interest in having a chamber music group perform the piece live. And Sawe has started working with the Monterey Bay Aquarium Research Institute to explore some of its Pacific Ocean data for another sonification project down the road, one that could add another song to the soundtrack of climate change.
While data sonification is still far from the mainstream scientific process, music could be a linchpin for taking climate research out of the pages of academic journals and into our lives. And it may serve as a reminder that we’re all composers, and our choices will define what the next movement sounds like.