10 Jan. 2023 · What if every single note of a song could represent information and communicate ideas like lyrics do? What if we could give data a ‘voice’?
Think about that one song that makes your heart sing or moves you to tears. Or the last time you were in a movie theater and the sound effects had you on the edge of your seat. Sound, and by extension music, has the power to affect our mood, help us learn, and even change our perspective on reality.
So when it comes to the intersection of music, information, and technology, it’s not a huge surprise that data sonification is now on the rise. Data sonification is the process of using non-verbal audio to communicate information. In other words, it transforms data into sound, helping us understand information faster and on a deeper, sometimes emotional level.
Just like a movie’s sound effects and soundtrack, data sonification can have a huge impact on our mood and our ability to grasp concepts quickly. Imagine being able to hear your WiFi network, the communicative signals sent between a network of mushrooms, or even a pandemic. Believe it or not, that’s exactly what data sonification enables you to do.
So what are the practical applications of data sonification in the fields of education, health, and even business? How is it already being used to reach wider audiences and create inclusivity in science? Where does the line between science, art, and music lie, and is there a target market for this kind of art?
Join us in our latest podcast episode as we interview sonification experts Professor Paolo Ciuccarelli (Northeastern University) and Research Scientist Sara Lenzi (Critical Alarms Lab, Delft University of Technology), as well as Degen Blues creator Andy Szybalski (also co-founder of Uber Eats and Google Street View) to discuss all these questions and much more.
The Mix is a Musixmatch Pro podcast.
Original music for The Mix Podcast was written and produced by Pierfrancesco Melucci.
This episode also contains audio sampled from "Hear Climate Data Turned Into Music" by Chris Chafe of the Center for Computer Research in Music and Acoustics at Stanford and Degen Blues.