A Perceptually Based Audio Signal Model with Application to Audio Transformations
Time and frequency are inherently coupled. Speed up the playback of a recording (time domain) and the perceived pitch of the music will be higher than the original (frequency domain). For many applications, changing time without changing frequency, and vice versa, is desirable. The focus of this talk will be on a perceptually relevant signal model that allows the decoupling of time and frequency. Modifying the model parameters before regenerating the signal provides a means for a wide range of musically interesting transformations. The model assumes audio signals are composed of three basic components: sinusoids, transients, and noise. In addition to the signal processing methods used to extract these parameters in a perceptually meaningful way, the history of such models and sound examples of various transformations will be presented.
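To illustrate the core idea of decoupling time and frequency, here is a minimal sketch (not the method presented in the talk) of the sinusoidal part of such a model: a dominant sinusoid is estimated from a recording's spectrum, then resynthesized over a longer duration at the same frequency, so the signal is stretched in time without changing pitch. The function name and parameters are illustrative assumptions.

```python
import numpy as np

def stretch_sinusoid(x, sr, factor):
    """Toy sinusoidal-model time stretch (illustrative only): estimate the
    dominant sinusoid's frequency and amplitude from the windowed FFT, then
    resynthesize it over a stretched duration at the same pitch."""
    w = np.hanning(len(x))
    spectrum = np.fft.rfft(x * w)
    k = np.argmax(np.abs(spectrum))      # dominant FFT bin
    freq = k * sr / len(x)               # bin index -> Hz
    amp = 2 * np.abs(spectrum[k]) / np.sum(w)
    n_out = int(len(x) * factor)         # stretched length in samples
    t = np.arange(n_out) / sr
    # Resynthesis: longer signal, identical frequency -> pitch preserved.
    return amp * np.sin(2 * np.pi * freq * t), freq

sr = 8000
t = np.arange(sr) / sr                     # 1 second of audio
x = 0.5 * np.sin(2 * np.pi * 440 * t)      # a 440 Hz tone
y, est_freq = stretch_sinusoid(x, sr, 2.0) # twice as long, same pitch
```

A real sines-plus-transients-plus-noise system would track many partials frame by frame and handle the transient and noise components separately; this sketch only shows why a parametric model makes time and frequency independently adjustable.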
Tony Verma is Director of Audio Technologies at Vidiator Technology Inc, where his work focuses on streaming media platforms tailored for wireless network environments. His previous and current research revolves around signal processing with an emphasis on audio applications. He is particularly interested in the intersection of science and art. He has spent many hours at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University, where he obtained his Ph.D. in electrical engineering in 1999.