Research Day Poster April 25, 2006
By: Donald S. Williamson and Sridhar Pilli
Advisor: Dr. Youngmoo Kim
The explosion of MP3s and iPods has created an overabundance of musical choices, making the act of organizing one's music tedious and time-consuming. Computers, however, still relate to sound as an abstract series of numbers, not as humans do: as instruments, performers, and songs. The goal of this project is to use signal processing and machine learning methods to automatically identify the artists and musical genres within a collection of songs, enabling easy and accurate organization. In our system, we employ acoustic features to model the time-varying spectral characteristics of each song. These acoustic features are passed to a machine learning classification system, which uses statistical patterns to categorize songs. In a pilot study using music from 18 different artists, the classification system identified the correct performing artist for 72% of the songs. In another experiment using 57 songs from 5 different genres (Gospel, Pop, R&B, Rap, and Rock), our system identified the correct musical genre for 60% of the songs.
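The pipeline described above (acoustic features summarizing spectral content, followed by a statistical classifier) can be sketched minimally as follows. The poster does not specify the exact features or learning method, so this sketch makes illustrative assumptions: spectral-centroid features computed per frame, and a nearest-centroid classifier over per-class feature averages.

```python
import numpy as np

def spectral_features(signal, frame_size=512):
    """Frame the signal and compute the spectral centroid (in FFT bins)
    of each frame, a crude summary of its spectral content."""
    n_frames = len(signal) // frame_size
    feats = []
    for i in range(n_frames):
        frame = signal[i * frame_size:(i + 1) * frame_size]
        mag = np.abs(np.fft.rfft(frame))
        bins = np.arange(len(mag))
        # Magnitude-weighted mean frequency bin; epsilon avoids divide-by-zero.
        feats.append((bins * mag).sum() / (mag.sum() + 1e-12))
    return np.array(feats)

def train(labelled_clips):
    """Model each class (artist or genre) by the mean feature value
    of its training clips."""
    return {label: np.mean([spectral_features(c).mean() for c in clips])
            for label, clips in labelled_clips.items()}

def classify(models, clip):
    """Assign a clip to the class whose model is closest to the
    clip's average feature value."""
    f = spectral_features(clip).mean()
    return min(models, key=lambda label: abs(models[label] - f))
```

A real system would use richer features (e.g., mel-frequency cepstral coefficients) and a probabilistic classifier, but the structure — per-song spectral summaries fed to a model trained per class — follows the abstract.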