Archived Research Projects

Exploring Creative Expression through Music and Audio Technology


The basic process of music recording and production has remained essentially unchanged for the past 40 years. This research project makes use of structured audio, a novel format that takes full advantage of the capabilities of digital music devices and offers the opportunity to dramatically improve the music production and listening experience.

Music Video Game Study


This research project develops and implements a one-year longitudinal evaluation of the impact of musical video games, specifically investigating how game proficiency affects musical skill development and whether avid game play leads players to pursue other music-making outlets and formal music education.
This one-year research project is supported by a grant from the NAMM Foundation.

Combined Audio and Video Analysis for Guitar Chord Identification


Stringed instruments such as the guitar add extra ambiguity to chord identification, since a single chord or melody may be played in different positions on the fretboard. Preserving this information is important because it captures the original fingering and the implied "easiest" way to perform the selection. This chord identification system combines analysis of the audio, to determine the general chord scale (e.g., A major, G minor), with video of the guitarist, to determine the chord voicing (e.g., open, barred, inverted), in order to accurately identify the guitar chord.
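
As a rough illustration of the audio half and the fusion step, the sketch below matches a 12-bin chroma vector against major and minor triad templates and attaches a voicing label supplied by a hypothetical video classifier; the templates, note ordering, and function names are illustrative assumptions, not the project's actual implementation.

import numpy as np

NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def chord_from_chroma(chroma):
    """Return (root, quality) for the best-matching triad template."""
    major = np.zeros(12); major[[0, 4, 7]] = 1.0   # root, major third, fifth
    minor = np.zeros(12); minor[[0, 3, 7]] = 1.0   # root, minor third, fifth
    best = (None, None, -np.inf)
    for root in range(12):
        for quality, template in (("major", major), ("minor", minor)):
            score = np.dot(chroma, np.roll(template, root))
            if score > best[2]:
                best = (NOTE_NAMES[root], quality, score)
    return best[0], best[1]

def identify_guitar_chord(chroma, voicing_from_video):
    """Fuse the audio chord scale with the video-derived voicing label."""
    root, quality = chord_from_chroma(chroma)
    return f"{root} {quality} ({voicing_from_video})"

# Example: an idealized A-major chroma plus a video classifier reporting "open".
chroma = np.zeros(12); chroma[[0, 4, 7]] = 1.0
print(identify_guitar_chord(chroma, "open"))       # -> "A major (open)"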

MoodPlaylists


This research project explores a method of automatically generating playlists designed to change a listener’s mood. It makes use of the data collected by MoodSwings.
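
A minimal sketch of one way such a playlist could be assembled, assuming each song carries valence-arousal coordinates of the kind MoodSwings collects; the trajectory heuristic and song data below are invented for illustration, not the project's method.

import numpy as np

def mood_playlist(songs, start, target, length=5):
    """songs: dict name -> (valence, arousal); walk from start mood to target."""
    start, target = np.asarray(start, float), np.asarray(target, float)
    remaining = dict(songs)
    playlist = []
    for i in range(1, length + 1):
        waypoint = start + (target - start) * i / length   # linear mood trajectory
        name = min(remaining,
                   key=lambda n: np.linalg.norm(np.asarray(remaining[n]) - waypoint))
        playlist.append(name)
        del remaining[name]
    return playlist

songs = {"calm": (0.2, -0.6), "mellow": (0.5, -0.2),
         "driving": (0.6, 0.4), "upbeat": (0.8, 0.7)}
print(mood_playlist(songs, start=(0.1, -0.7), target=(0.9, 0.8), length=4))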

Efficient Acoustic Feature Computation Using FPGAs


Traditional methods of acoustic feature computation are often costly and inefficient in both space and power. We aim to exploit the massively parallel architecture of the FPGA to achieve the processing power of a cluster at a fraction of the cost.
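
For reference, the sketch below computes one common acoustic feature (the spectral centroid) on the CPU; each frame is processed independently, which is precisely the per-frame parallelism an FPGA pipeline can exploit. This is an illustrative baseline, not the project's hardware design.

import numpy as np

def spectral_centroid(signal, sr, frame=1024, hop=512):
    """Per-frame spectral centroid in Hz; every frame is independent."""
    freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
    centroids = []
    for start in range(0, len(signal) - frame, hop):
        mag = np.abs(np.fft.rfft(signal[start:start + frame]))
        centroids.append(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
    return np.array(centroids)

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)            # 1 s test tone at 440 Hz
print(spectral_centroid(tone, sr)[:3])        # centroids cluster near 440 Hz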

Beat Sync Mash Coder


This is a new tool for semi-automated real-time creation of beat-synchronous music mashups. Automatic synchronized playback using phase vocoder and beat tracker technology allows the user to focus on clip selection, making it easy to create dynamic, intricate and musically coherent soundscapes.
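
The arithmetic behind the synchronization is simple: the sketch below shows, with invented numbers, the time-stretch ratio a phase vocoder would apply and the offset that lines up downbeats once a beat tracker has supplied tempos and beat times. It is a toy illustration, not the tool's actual code.

def sync_params(master_bpm, clip_bpm, master_beat, clip_beat):
    """Stretch ratio (duration multiplier) and start offset for one clip."""
    stretch = clip_bpm / master_bpm          # >1 slows the clip, <1 speeds it up
    # After stretching, the clip's beat lands at clip_beat * stretch seconds;
    # shift playback so it coincides with the master's beat.
    offset = master_beat - clip_beat * stretch
    return stretch, offset

stretch, offset = sync_params(master_bpm=120, clip_bpm=128,
                              master_beat=2.0, clip_beat=1.5)
print(f"stretch ratio {stretch:.3f}, start offset {offset:+.3f} s")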

Online Activities for Learning and Listening


Web-based, cross-platform games designed to illustrate psychoacoustic concepts to K-12 students and collect human evaluation data. These games introduce players to problems in speaker identification and musical timbre recognition through interactive interfaces that encourage learning through experimentation.

Intelligent Jazz Accompaniment


This research develops a system that provides an intelligent piano accompaniment in real time, responding sensitively to a live jazz trio.

Joint Voice Identification and Separation


This research explores the performance of an automatic speaker identification system using the voice as a biometric identifier. This project is a collaboration with Dr. John MacLaren Walsh.

Gesture Recognition for Conducting Computer Music


This research enables computers to identify and respond to gestures. Success will open a broad new communication channel for manipulating electronics and digital audio.

Adaptive Physical Interfaces for Digital Music


Digital sampling and synthesis can create a virtually infinite number of sounds. To play and control these sounds musically, there must be a physical interface that one "plays." This project will research the newest tools for digital sound control, develop new applications for traditional instrument forms, and design adaptive physical interfaces that are more flexible than those based on conventional instruments.

Real-time Audio-to-Score Alignment (a.k.a. Score Following)


This research develops an algorithm that enables a computer to track the progress of a live performance against a musical score or a previous recording of the same composition.

Please visit our DrexelCast project page for the most recent updates.
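
At its core, alignment of this kind can be posed as dynamic time warping (DTW) over per-frame features such as chroma; the sketch below shows the offline recurrence, while a real-time follower uses an online variant of the same idea. The features and dimensions are invented for illustration.

import numpy as np

def dtw_path(score_feats, live_feats):
    """Classic DTW: match each live frame to a score frame."""
    n, m = len(score_feats), len(live_feats)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(score_feats[i - 1] - live_feats[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    i, j, path = n, m, []                      # backtrack the optimal alignment
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]

score = np.random.rand(20, 12)                 # 20 score frames of chroma-like features
live = np.repeat(score, 2, axis=0)             # a "performance" at half tempo
print(dtw_path(score, live)[:5])               # live frames map 2-to-1 onto the score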

Music Similarity


This project explores a computer algorithm that assesses song similarity based solely on the sound waveform. The algorithm extracts acoustic features that represent the timbre, or sound quality, of each song and assesses similarity by the quantitative distance between those features. We evaluate the system by comparing the algorithm's results to responses from human subjects.
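
A minimal sketch of the idea, with an illustrative feature choice (log-magnitude spectrum statistics) standing in for the project's actual timbre features:

import numpy as np

def timbre_signature(signal, frame=1024, hop=512):
    """Summarize a waveform by the mean and spread of its frame spectra."""
    frames = [np.abs(np.fft.rfft(signal[s:s + frame]))
              for s in range(0, len(signal) - frame, hop)]
    spectra = np.log1p(np.array(frames))
    return np.concatenate([spectra.mean(axis=0), spectra.std(axis=0)])

def song_distance(sig_a, sig_b):
    return np.linalg.norm(sig_a - sig_b)       # smaller = more similar timbre

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)             # pure tone
square = np.sign(np.sin(2 * np.pi * 220 * t))  # same pitch, brighter timbre
print(song_distance(timbre_signature(tone), timbre_signature(square)))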

Acoustical Analysis of Mitchell Auditorium


How do we determine the best-sounding seat in the house? We investigated the acoustic characteristics of Mitchell Auditorium in the Bossone Research Center, focusing on its resonance frequencies, to determine where sound is distorted the least.
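
For an idealized rectangular room, the resonance (mode) frequencies follow f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2). Mitchell Auditorium is not a simple box, so the sketch below, with assumed dimensions, is only a first-order illustration rather than our measured results.

import numpy as np
from itertools import product

def room_modes(Lx, Ly, Lz, c=343.0, max_order=2):
    """Mode frequencies of an ideal rectangular room (dimensions in meters)."""
    modes = []
    for nx, ny, nz in product(range(max_order + 1), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue
        f = (c / 2) * np.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)
        modes.append(((nx, ny, nz), f))
    return sorted(modes, key=lambda m: m[1])

for mode, f in room_modes(20.0, 15.0, 8.0)[:5]:    # assumed dimensions
    print(mode, f"{f:.1f} Hz")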

Voice Transfer over OLPC Mesh Networks


The One Laptop per Child (OLPC) project, also known as the “$100 laptop”, aims to make groundbreaking computing technology available on an extremely limited budget. Our objective is to develop a live voice communication application for the OLPC's model XO that will enable a high degree of interaction between laptop users at a distance.
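
As a minimal sketch of the transport layer such an application might use, the code below chunks audio into small datagrams and sends them over UDP, which tolerates the loss and jitter of a mesh link better than TCP; the peer address, port, and chunk size are placeholders, not the project's actual protocol.

import socket

PEER = ("192.168.1.42", 9000)      # hypothetical peer on the mesh
CHUNK = 320                        # 20 ms of 8 kHz, 16-bit mono audio

def send_voice(pcm_bytes):
    """Stream raw PCM to the peer, one small datagram at a time."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(0, len(pcm_bytes), CHUNK):
        sock.sendto(pcm_bytes[i:i + CHUNK], PEER)
    sock.close()

send_voice(b"\x00" * 3200)         # 200 ms of silence standing in for mic input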

Automatic Identification and Classification of Music


Training computers to identify the performing artist and the genre of a song from the audio signal alone.
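
A toy sketch of the classification stage: represent each track as a feature vector and assign the label of the nearest class centroid. The features, labels, and classifier here are illustrative stand-ins for the project's actual methods.

import numpy as np

def train_centroids(features, labels):
    """One mean feature vector per class."""
    return {lab: np.mean([f for f, l in zip(features, labels) if l == lab], axis=0)
            for lab in set(labels)}

def classify(centroids, feature):
    return min(centroids, key=lambda lab: np.linalg.norm(centroids[lab] - feature))

rng = np.random.default_rng(0)
jazz = rng.normal(0.0, 0.3, size=(10, 8))      # synthetic "jazz" feature cluster
rock = rng.normal(1.0, 0.3, size=(10, 8))      # synthetic "rock" feature cluster
centroids = train_centroids(list(jazz) + list(rock), ["jazz"] * 10 + ["rock"] * 10)
print(classify(centroids, rng.normal(1.0, 0.3, size=8)))   # most likely "rock"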

Singing Voice Analysis/Synthesis


The singing voice is the oldest and most variable of musical instruments. But the acoustic flexibility of the voice in intoning words, shaping phrases, and conveying emotion also makes it the most difficult instrument to model computationally. Moreover, each voice possesses distinctive features independent of phonemes and words. This project investigates a signal model of the singing voice that attempts to capture the salient features of voice identity. This model allows for audio data reduction, singer identification, and even voice transformation.
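
In the spirit of such a model, the sketch below estimates per-harmonic amplitudes from a voiced frame (a compact parameterization, hence data reduction) and resynthesizes at a shifted pitch as a crude voice transformation. The known f0 and the projection method are simplifying assumptions, not the project's model.

import numpy as np

def analyze(frame, sr, f0, n_harmonics=10):
    """Estimate harmonic amplitudes by projecting onto each harmonic of f0."""
    t = np.arange(len(frame)) / sr
    amps = []
    for k in range(1, n_harmonics + 1):
        c = (frame * np.exp(-2j * np.pi * k * f0 * t)).mean()
        amps.append(2 * np.abs(c))
    return np.array(amps)

def synthesize(amps, sr, f0, n_samples):
    """Rebuild a frame from harmonic amplitudes at a (possibly new) pitch."""
    t = np.arange(n_samples) / sr
    return sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t) for k, a in enumerate(amps))

sr, f0 = 16000, 200.0
t = np.arange(2048) / sr
voiced = 0.8 * np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
amps = analyze(voiced, sr, f0)
shifted = synthesize(amps, sr, f0 * 1.5, 2048)     # resynthesize a fifth higher
print(np.round(amps[:3], 2))                       # approximately [0.8, 0.3, 0.0]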