Structured Music Production: Exploring Creative Expression through Music and Audio Technology
The appeal of music is that it gives expression to our emotions, and it follows that many individuals want some creative input into that expression. Historically, the primary exposure to music was live performance, which offered audiences an opportunity to interact with the musicians and the music, but today the vast majority of music is experienced through recordings. Though recent digital audio technologies have had a tremendous impact on the world of recorded music, the fundamental nature of a recording remains unchanged: once a recording is made, that single "performance" is forever fixed, preventing any true interaction with the listener.
This project integrates research in digital audio technology under the vision of transforming the act of listening to "recorded" music into an interactive experience in which the "performance" responds to the creative input of the listener. Central to this vision is a concept known as Structured Audio, a semantic object-based representation of sound. The project consists of two parts:
1. Structured Sound Modeling of musical instruments, which uses a novel signal processing and machine learning framework to enable a greater degree of musically expressive control than existing models for sound synthesis can achieve.
2. The Structured Audio Platform, consisting of devices and software that employ these new instrument models to give music producers an expanded artistic palette, together with interfaces that let non-musicians interact with their music by controlling creative aspects of the "performance".
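To make the object-based idea concrete, the following is a minimal, hypothetical sketch (not the project's actual representation or API): instead of storing a fixed waveform, a structured recording stores note-event objects with expressive parameters, and audio is rendered from them on demand, so a listener-side parameter change simply triggers a re-render. All names here (`NoteEvent`, `render`) are illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class NoteEvent:
    """One semantic sound object: a note with expressive parameters."""
    pitch_hz: float    # fundamental frequency
    start_s: float     # onset time in seconds
    duration_s: float  # length in seconds
    loudness: float    # 0.0-1.0; an expressive control left open to the listener

def render(events, sample_rate=8000):
    """Render note events to a mono waveform using a toy sine 'instrument'.

    A real structured-audio system would use far richer instrument models;
    the point is only that the waveform is derived from objects, not stored.
    """
    total_s = max(e.start_s + e.duration_s for e in events)
    out = [0.0] * int(total_s * sample_rate)
    for e in events:
        start = int(e.start_s * sample_rate)
        for n in range(int(e.duration_s * sample_rate)):
            t = n / sample_rate
            out[start + n] += e.loudness * math.sin(2 * math.pi * e.pitch_hz * t)
    return out

# Because the note objects survive in the "recording", a listener's tweak
# (e.g. playing the second note more softly) is just a parameter change
# followed by re-rendering -- no new studio performance is needed.
score = [NoteEvent(440.0, 0.0, 0.5, 0.8), NoteEvent(660.0, 0.5, 0.5, 0.4)]
audio = render(score)
```

The same score objects could instead be rendered by a different instrument model, which is what allows the "performance" to respond to the listener rather than being fixed at recording time.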
This material is based in part upon work supported by the National Science Foundation under Grant No. IIS-0644151. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).