Rendering Localized Spatial Audio in a Virtual Auditory Space
Date: 2002-04-04
Authors: Dmitry Zotkin, Ramani Duraiswami, Larry S. Davis
Abstract
High-quality virtual audio scene rendering is essential for emerging virtual and augmented reality applications, for perceptual user interfaces, and for sonification of data. We describe algorithms for creating virtual auditory spaces by rendering the cues that arise from anatomical scattering, environmental scattering, and dynamical effects. We use a novel way of personalizing head-related transfer functions (HRTFs), selecting them from a database on the basis of anatomical measurements. Details of the algorithms for HRTF interpolation, room impulse response creation, HRTF selection from the database, and audio scene presentation are given. Our system runs in real time on an office PC without specialized DSP hardware.
Also available as UMIACS-TR-2002-28.
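To illustrate the core idea behind HRTF-based spatialization and interpolation mentioned in the abstract, here is a minimal sketch: a mono signal is convolved with a left and a right head-related impulse response (HRIR), and HRIRs for directions between measurement points are obtained by linear interpolation. The HRIRs below are synthetic placeholders (a delayed, attenuated impulse per ear), not the measured database or the personalization procedure the report describes; the 30-degree grid and the azimuth convention (positive toward the right ear) are assumptions for the example only.

```python
import numpy as np

FS = 44100                         # sample rate, Hz (assumed)
HRIR_LEN = 128                     # impulse-response length, samples
AZIMUTHS = np.arange(0, 360, 30)   # toy measurement grid, degrees

def toy_hrir(azimuth_deg):
    """Fabricate a crude left/right HRIR pair: interaural time and
    level differences grow as the source moves off the median plane."""
    rad = np.deg2rad(azimuth_deg)
    itd = 0.0007 * np.sin(rad)             # interaural time difference, s
    left = np.zeros(HRIR_LEN)
    right = np.zeros(HRIR_LEN)
    dl = int(round(max(itd, 0.0) * FS))    # extra delay to the far ear
    dr = int(round(max(-itd, 0.0) * FS))
    left[10 + dl] = 1.0 - 0.3 * max(np.sin(rad), 0.0)    # crude level cue
    right[10 + dr] = 1.0 - 0.3 * max(-np.sin(rad), 0.0)
    return left, right

HRIRS = {az: toy_hrir(az) for az in AZIMUTHS}

def interpolated_hrir(azimuth_deg):
    """Linearly blend the HRIRs of the two nearest measured azimuths --
    one simple stand-in for the interpolation step named in the report."""
    az = azimuth_deg % 360.0
    lo = (int(az) // 30) * 30
    hi = (lo + 30) % 360
    w = (az - lo) / 30.0
    l_lo, r_lo = HRIRS[lo]
    l_hi, r_hi = HRIRS[hi]
    return (1 - w) * l_lo + w * l_hi, (1 - w) * r_lo + w * r_hi

def spatialize(mono, azimuth_deg):
    """Render a mono signal at the given azimuth by convolving it with
    the interpolated per-ear impulse responses."""
    left_ir, right_ir = interpolated_hrir(azimuth_deg)
    return np.convolve(mono, left_ir), np.convolve(mono, right_ir)

# Spatialize 0.1 s of white noise 45 degrees toward the right ear:
noise = np.random.default_rng(0).standard_normal(FS // 10)
left, right = spatialize(noise, 45.0)
```

In a real-time system such as the one described, the per-ear convolutions would be performed block-wise in the frequency domain rather than with `np.convolve`, and a room impulse response would be convolved in as well to supply the environmental-scattering cues.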