Music recommendation systems have been around for a while: last.fm, Pandora, Spotify, Peter Gabriel's "The Filter". More recently they have been extended into the social domain, much as we once recommended bands to each other and swapped mix tapes in the days before mp3s and Napster. One thing all of these systems have in common, though, is that the software does not understand the emotion inherent in the songs, beyond broad genre labels.
Now, informatics expert Angelina Tzacheva and her colleagues at the University of South Carolina Upstate, Spartanburg, hope to remedy the situation by developing an algorithm that can extract the emotional qualities of a song from an audio file. Writing in the International Journal of Social Network Mining this month, they explain how they have trained their algorithm to recognize different timbres (the characteristic sounds of different instruments) commonly associated with specific emotions in a piece of music. In so doing they hope to bridge the gap between earlier attempts to detect emotions in music and the actual human perception of the feelings evoked by a specific musical work.
The team explains: "We believe emotions are not something that is embedded within a digital signal, but is a feeling experienced by a human being." They then ask, "Is it possible for an emotion to be searched for and detected within a signal?" They find that indeed it is: "Certain information is present within the signal, which can be linked to the emotion that is invoked within a human while listening to the music." The team focuses on timbre as the bridge between the information and the emotion; timbre comprises the characteristics of a musical sound other than its pitch or loudness.
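The paper does not publish its feature-extraction code, but the idea of reading timbre out of a signal can be illustrated with a standard timbre descriptor, the spectral centroid (the frequency-weighted mean of the magnitude spectrum, often described as the "brightness" of a sound). This is a minimal sketch in numpy, not the authors' actual algorithm; the synthetic tones are invented for illustration:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Frequency-weighted mean of the magnitude spectrum: a simple timbre descriptor."""
    magnitudes = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * magnitudes) / np.sum(magnitudes))

sr = 22050
t = np.arange(sr) / sr  # one second of audio

# "Dull" tone: a pure 220 Hz sine wave (all energy at the fundamental).
dull = np.sin(2 * np.pi * 220 * t)

# "Brighter" tone: same fundamental plus strong upper harmonics,
# crudely imitating a reedier, more aggressive instrumental timbre.
bright = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in (1, 3, 5, 7, 9))

print(spectral_centroid(dull, sr))    # close to 220 Hz
print(spectral_centroid(bright, sr))  # noticeably higher
```

Two sounds at the same pitch and loudness can thus yield very different numbers, which is what makes features of this kind candidates for linking a signal to a perceived emotion.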
The team suggests that their approach could be applied in music recommendation systems, allowing users to retrieve music and create playlists based on the types of emotion different music might invoke. It might also find commercial use in radio and TV programming, as well as in music therapy.
Tzacheva A.A., Schlingmann D. & Bell K.J. (2012). Automatic detection of emotions with music files, International Journal of Social Network Mining, 1 (2) 129. DOI: 10.1504/IJSNM.2012.051054