Rendering Localized Spatial Audio in a Virtual Auditory Space

dc.contributor.author: Zotkin, Dmitry
dc.contributor.author: Duraiswami, Ramani
dc.contributor.author: Davis, Larry S.
dc.description.abstract: High-quality virtual audio scene rendering is essential for emerging virtual and augmented reality applications, for perceptual user interfaces, and for the sonification of data. We describe algorithms for creating virtual auditory spaces by rendering the cues that arise from anatomical scattering, environmental scattering, and dynamical effects. We use a novel way of personalizing the head-related transfer functions (HRTFs) from a database, based on anatomical measurements. Details of the algorithms for HRTF interpolation, room impulse response creation, HRTF selection from a database, and audio scene presentation are presented. Our system runs in real time on an office PC without specialized DSP hardware. (Also cross-referenced as UMIACS-TR-2002-28.)
dc.format.extent: 582151 bytes
dc.relation.ispartofseries: UM Computer Science Department; CS-TR-4348
dc.relation.ispartofseries: UMIACS; UMIACS-TR-2002-28
dc.title: Rendering Localized Spatial Audio in a Virtual Auditory Space
dc.type: Technical Report
dc.relation.isAvailableAt: Digital Repository at the University of Maryland
dc.relation.isAvailableAt: University of Maryland (College Park, Md.)
dc.relation.isAvailableAt: Tech Reports in Computer Science and Engineering
dc.relation.isAvailableAt: UMIACS Technical Reports
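
The abstract lists HRTF interpolation among the rendering steps. As a minimal illustrative sketch (not the authors' actual algorithm, which is detailed in the report itself), interpolating an HRTF for an unmeasured direction can be done by linearly blending the impulse responses measured at the two nearest azimuths; the dictionary of measured responses and the degree-based keying below are assumptions for the example:

```python
import numpy as np

def interpolate_hrtf(measured, target_az):
    """Linearly interpolate an HRTF impulse response at `target_az` (degrees).

    `measured` maps measurement azimuth (degrees) -> impulse-response
    ndarray. The target must lie within the measured azimuth range.
    This is a simple 1-D sketch; a full system would also interpolate
    over elevation and handle azimuth wrap-around.
    """
    lo = max(a for a in measured if a <= target_az)
    hi = min(a for a in measured if a >= target_az)
    if lo == hi:
        return measured[lo]  # exact measurement available
    w = (target_az - lo) / (hi - lo)  # blend weight toward `hi`
    return (1.0 - w) * measured[lo] + w * measured[hi]
```

For example, with responses measured at 0° and 90°, a 45° request returns their equal-weight average; real systems typically interpolate over a denser spherical grid of measurements.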
