Matthew Prockup



  • MS Drexel University (Electrical Engineering)
  • BS Drexel University (Electrical Engineering)


I am currently a student pursuing my Ph.D. in Electrical Engineering at Drexel University. The overall goal of my work is to develop methods to better quantify the specific attributes we use to express ourselves through music. My work spans a wide range of topics, including audio signal processing, machine learning, and human-computer interaction. I am also an avid percussionist and composer, having performed in and composed for various ensembles large and small.

Research Interests

Modeling musical attributes: When searching, sorting, and recommending music, humans apply a variety of attributes to judge similarity and make distinctions. By designing audio features that capture these attributes, we can develop models that automatically generate descriptions of music that are grounded and intuitive.

Expression in music performance: Expression is the creative variation a performer imparts on a piece of music to make it their own. By combining signal processing techniques, machine learning algorithms, and music performance practice, we attempt to quantify what makes a musical performance expressive and creative.

Development of interactive live performance systems: A large subset of musical performance requires a relationship between performers and their audience. By creating interactive media technologies for musical performance, musicians can provide contextually relevant information and build more intimate relationships with their audiences.

Developing interactive music and sound-based activities: Through the lens of the music technology we use every day, we can motivate and illustrate science, technology, engineering, arts, and math (STEAM) concepts in K-12 curricula.


Projects

    Modeling Genre with Musical Attributes: Genre provides one of the most convenient groupings of music, but it is often regarded as poorly defined and largely subjective. In this work, we ask whether musical genres can be modeled objectively via a combination of musical attributes, and whether audio features can mimic the behavior of those attributes. This work is done in collaboration with Pandora, and evaluation is performed using Pandora’s Music Genome Project® (MGP).
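As a toy illustration of the idea (not the actual MGP pipeline, which relies on expert-labeled attributes and tree-ensemble models), a genre can be treated as a region in attribute space; the attribute names and values below are invented for the sketch:

```python
import numpy as np

# Hypothetical songs described by hand-labeled musical attributes,
# e.g. [swing, syncopation, distortion, acousticness] in [0, 1].
# Here a genre is modeled by a simple nearest-centroid rule as a
# stand-in for the tree ensembles used in the actual work.
train = {
    "jazz": np.array([[0.9, 0.7, 0.1, 0.8],
                      [0.8, 0.6, 0.2, 0.7]]),
    "rock": np.array([[0.1, 0.3, 0.9, 0.2],
                      [0.2, 0.4, 0.8, 0.1]]),
}
centroids = {g: x.mean(axis=0) for g, x in train.items()}

def predict_genre(attrs):
    """Assign the genre whose attribute centroid is closest."""
    return min(centroids, key=lambda g: np.linalg.norm(attrs - centroids[g]))

song = np.array([0.85, 0.65, 0.15, 0.75])  # swung, acoustic profile
print(predict_genre(song))  # "jazz"
```

The question the project asks is whether audio features extracted from the signal can reproduce the behavior of the human-labeled attribute vectors used above.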

    Modeling Rhythmic Attributes: Musical meter and attributes of the rhythmic feel such as swing, syncopation, and danceability are crucial when defining musical style. In this work, we propose a number of tempo-invariant audio features for modeling meter and rhythmic feel. This work is done in collaboration with Pandora, and evaluation is performed using Pandora’s Music Genome Project® (MGP).
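A minimal sketch of the tempo-invariance idea (not one of the proposed features themselves): if the lag axis of an onset-envelope autocorrelation is indexed in beats rather than frames, the same rhythmic pattern played at two tempi yields a similar profile. All signals here are synthetic impulse trains:

```python
import numpy as np

def tempo_invariant_profile(onsets, beat_period, n_lags=16):
    """Autocorrelate an onset-strength envelope, then sample the result
    at quarter-beat multiples so that the same rhythm at two tempi
    yields (approximately) the same profile."""
    ac = np.correlate(onsets, onsets, mode="full")[len(onsets) - 1:]
    ac = ac / ac[0]  # normalize by total energy
    lags = (np.arange(1, n_lags + 1) * beat_period / 4).round().astype(int)
    lags = lags[lags < len(ac)]
    return ac[lags]

def pattern(period, n=512):
    """A simple pulse pattern: one onset every `period` frames."""
    x = np.zeros(n)
    x[::period] = 1.0
    return x

slow = tempo_invariant_profile(pattern(32), beat_period=32)
fast = tempo_invariant_profile(pattern(16), beat_period=16)
print(np.allclose(slow, fast, atol=0.15))  # True: profiles match across tempi
```

Sampling the lag axis relative to the beat period is what removes the tempo dependence; a feature indexed in raw seconds would shift as the tempo changes.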

    Expressive Percussion: Percussionists alter tempo, dynamics, timbre, and excitation techniques in order to perform expressively. In collaboration with the Music Industry department at Drexel, we developed a sample library containing a wide range of percussion instrument articulations as well as a novel mobile tablet interface to easily navigate this expressive library. In addition to being a stand-alone sample library for performance, it acts as a large dataset of sound samples that can be used for experimentation.

    LiveNote: Orchestral Performance Companion: I am continuing development of the iNotes project in collaboration with the Philadelphia Orchestra. We have developed a system that guides users through a performance in real time using a handheld application (an iPhone app). Using audio features, we align the live performance audio with a previously annotated reference recording. The aligned position is transmitted to users’ handheld devices, and pre-annotated information about the piece is displayed synchronously. [video] [official PhilOrch page]
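The alignment step can be sketched with plain dynamic time warping; the deployed system necessarily uses an online variant over real audio features, whereas this toy uses one-hot vectors as stand-ins for feature frames:

```python
import numpy as np

def align(live, ref):
    """Offline DTW between live feature frames and reference frames.
    Returns the warping path as (live_frame, ref_frame) index pairs."""
    n, m = len(live), len(ref)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(live[i - 1] - ref[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # Backtrack from the end to recover the optimal path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

ref = np.eye(4)                   # 4 reference frames (one-hot "features")
live = np.repeat(ref, 2, axis=0)  # same piece performed at half tempo
path = align(live, ref)           # ends at (7, 3): last live frame -> last ref frame
```

The matched reference index for the newest live frame is what would be broadcast to the handheld devices to trigger the synchronized annotations.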

    The Science of Jazz: This unique performance allows attendees to experience the science behind the music, augmented with large-screen visuals and an interactive iPhone app illustrating the principles of frequency, harmony, and acoustics. In 2012, acclaimed keyboardist Marc Cary led his Focus Trio, along with featured soloist and Grammy Award-winning percussionist Will Calhoun and other special guests, in a tour de force of virtuoso jazz. [video]

    The AppLab @ ExCITe: In collaboration with the Pennoni Honors College, and with founding support from Bentley Systems, the ExCITe Center welcomes the APP Lab. Open to all students across the university, the APP Lab serves as a resource for anyone interested in mobile app development, whether novice or advanced. The APP Lab also sponsors periodic app-related public events, University panels and demonstrations, and student research and development projects throughout the year.

    Multi-touch Technology: I worked with a self-built multi-touch display, focusing on creating applications and SDKs to enhance and streamline the user experience. I worked with pathologists at the University of Pennsylvania and the Children's Hospital of Philadelphia to create a multi-touch medical image slide viewer. Additionally, we worked toward a simple SDK that allows non-technical developers to incorporate multi-touch into Adobe Flash applications.


Publications

  • Prockup, M., Ehmann, A., Gouyon, F., Schmidt, E., Celma, O., Kim, Y., "Modeling Genre with the Music Genome Project: Comparing Human-Labeled Attributes and Audio Features." International Society for Music Information Retrieval Conference, Malaga, Spain, 2015. [PDF]

  • Prockup, M., Asman, A., Ehmann, A., Gouyon, F., Schmidt, E., Kim, Y., "Modeling Rhythm Using Tree Ensembles and the Music Genome Project." Machine Learning for Music Discovery Workshop at the 32nd International Conference on Machine Learning, Lille, France, 2015. [PDF]

  • Prockup, M., Ehmann, A., Gouyon, F., Schmidt, E., Kim, Y., "Modeling Rhythm at Scale with the Music Genome Project." IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, New York, 2015. [PDF]

  • Prockup, M., Scott, J., and Kim, Y. "Representing Musical Patterns via the Rhythmic Style Histogram Feature." Proceedings of the ACM International Conference on Multimedia, Orlando, Florida, 2014. [PDF]

  • Prockup, M., Schmidt, E., Scott, J., and Kim, Y., "Toward Understanding Expressive Percussion Through Content Based Analysis." Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 2013. [PDF]

  • Schmidt, E. M., Prockup, M., Scott, J., Dolhansky, B., Morton, B. G., and Kim, Y. E. (2013, in press). Analyzing the Perceptual Salience of Audio Features for Musical Emotion Recognition. Computer Music Modeling and Retrieval. Music and Emotions.

  • Prockup, M., Grunberg, D., Hrybyk, A., and Kim, Y. E., "Orchestral Performance Companion: Using Real-Time Audio to Score Alignment." IEEE MultiMedia, vol. 20, no. 2, pp. 52-60, April-June 2013.

  • Schmidt, E. M., Prockup, M., Scott, J., Dolhansky, B., Morton, B. and Kim, Y. E. (2012). Relating perceptual and feature space invariances in music emotion recognition. Proceedings of the International Symposium on Computer Music Modeling and Retrieval, London, U.K.: CMMR. Best Student Paper. [PDF] [Oral Presentation]

  • Scott, J., Schmidt, E. M., Prockup, M., Morton, B. and Kim, Y. E. (2012). Predicting time-varying musical emotion distributions from multi-track audio. Proceedings of the International Symposium on Computer Music Modeling and Retrieval, London, U.K.: CMMR. [PDF]

  • Batula, A. M., Morton, B. G., Migneco, R., Prockup, M., Schmidt, E. M., Grunberg, D. K., Kim, Y. E., and Fontecchio, A. K. (2012). Music Technology as an Introduction to STEM. Proceedings of the 2012 ASEE Annual Conference, San Antonio, Texas: ASEE. [PDF]

  • Scott, J., Dolhansky, B., Prockup, M., McPherson, A., Kim, Y. E. (2012). New Physical and Digital Interfaces for Music Creation and Expression. Proceedings of the 2012 Music, Mind and Invention Workshop, Ewing, NJ. [PDF]

  • Prockup, M., Batula, A., Morton, B., and Kim, Y. E. (2012). Education Through Music Technology. Proceedings of the 2012 Music, Mind and Invention Workshop, Ewing, NJ.

  • Scott, J., Prockup, M., Schmidt, E. M., Kim, Y. E. (2011). Automatic Multi-Track Mixing Using Linear Dynamical Systems. Proceedings of the 8th Sound and Music Computing Conference, Padova, Italy: SMC. [PDF]

  • Kim, Y. E., Batula, A. M., Migneco, R., Richardson, P., Dolhansky, B., Grunberg, D., Morton, B., Prockup, M., Schmidt, E. M., and Scott, J. (2011). Teaching STEM concepts through music technology and DSP. Proceedings of the 14th IEEE Digital Signal Processing Workshop and 6th IEEE Signal Processing Education Workshop, Sedona, AZ: DSP/SPE. [PDF]