I'm currently a software engineer at Google Brain. While at Google I've worked on noise-robust speech recognition and music recommendation, among other things. Previously, I was a postdoc working on music information retrieval with Juan Bello at MARL at NYU. Earlier still, I was a graduate research assistant working with Dan Ellis in the Laboratory for the Recognition and Organization of Speech and Audio (LabROSA). I defended my dissertation in May 2009 (watch me write it at about 50,000× real-time here).

My research interests lie at the intersection of audio signal processing and machine learning. My dissertation research was devoted to model-based source separation, but I also found time to do a bit of music signal analysis to create some wacky remixes on the side. I've also done some work on music information retrieval. You can find more (outdated) information on my projects page.

You might also be interested in some of my freely available code, including assorted Python audio processing modules, and useful Matlab tools for functional programming, easier plotting, training GMMs/HMMs, and interfacing with HTK. I've also spent some time hacking on the Gordon music database and scikit-learn.