Details
Original language | English
---|---
Publication status | Published - 21 Aug 2019
Event | 2019 AES International Conference on Headphone Technology, San Francisco, United States, 27 Aug 2019 → 29 Aug 2019
Conference
Conference | 2019 AES International Conference on Headphone Technology
---|---
Country/Territory | United States
City | San Francisco
Period | 27 Aug 2019 → 29 Aug 2019
Abstract
Non-individual head-related transfer functions (HRTFs) are commonly used in binaural rendering systems because measuring personal HRTFs is difficult in consumer scenarios. However, the perceived externalization of such rendered virtual sound images is usually low, especially for frontal and rear sound sources. This study proposes a method for improving the perceived externalization of virtual frontal and rear sound images presented over headphones, consisting mainly of direction-dependent peak and notch filters for the direct sound and filter-bank-based decorrelation filters for the early reflections. The results of a subjective listening experiment showed that the perceived externalization of frontal and rear sound sources was substantially improved by the proposed method compared to the conventional method.
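To illustrate the kind of processing chain the abstract describes, the following is a minimal Python sketch: a peak/notch (parametric EQ) stage applied to the direct-sound segment of a binaural room impulse response, and per-band decorrelation of the early-reflection segment via a simple octave-band filter bank. All filter frequencies, gains, segment boundaries, and the decorrelation design are illustrative assumptions, not the parameters published in the paper.

```python
# Hypothetical sketch of direct-sound peak/notch filtering plus
# filter-bank-based decorrelation of early reflections (illustrative only).
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48000  # sample rate in Hz (assumption)

def peaking_eq_sos(fc, gain_db, q, fs=FS):
    """Second-order peaking (peak/notch) filter as one SOS section (RBJ cookbook)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * fc / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return np.hstack([b / a[0], a / a[0]])[None, :]  # shape (1, 6)

def decorrelate_band(x, seed):
    """Decorrelate one band with a short random-phase FIR (a common, simple choice)."""
    rng = np.random.default_rng(seed)
    n = 64
    phase = rng.uniform(-np.pi, np.pi, n // 2 + 1)
    phase[0] = 0.0  # keep DC real
    h = np.fft.irfft(np.exp(1j * phase), n)
    return np.convolve(x, h)[: len(x)]

def enhance_brir(brir, direct_len=int(0.003 * FS), early_len=int(0.08 * FS)):
    """Split a single-channel BRIR into direct / early / late parts and process each."""
    direct = brir[:direct_len].copy()
    early = brir[direct_len:direct_len + early_len].copy()
    late = brir[direct_len + early_len:]

    # Direction-dependent peak/notch filtering of the direct sound
    # (example values: boost around 4 kHz, notch around 8 kHz).
    for fc, gain_db, q in [(4000, 6.0, 2.0), (8000, -8.0, 4.0)]:
        direct = sosfilt(peaking_eq_sos(fc, gain_db, q), direct)

    # Filter-bank-based decorrelation of the early reflections:
    # octave-band Butterworth filter bank, each band decorrelated independently.
    edges = [(250, 500), (500, 1000), (1000, 2000), (2000, 4000), (4000, 8000)]
    processed_early = np.zeros_like(early)
    for i, (lo, hi) in enumerate(edges):
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band = sosfilt(sos, early)
        processed_early += decorrelate_band(band, seed=i)

    return np.concatenate([direct, processed_early, late])

if __name__ == "__main__":
    # Toy BRIR: exponentially decaying noise standing in for a measured response.
    rng = np.random.default_rng(0)
    t = np.arange(int(0.3 * FS))
    toy_brir = rng.standard_normal(t.size) * np.exp(-t / (0.05 * FS))
    print(enhance_brir(toy_brir).shape)
```

In practice the two ear channels of a BRIR would be processed separately, and the peak/notch parameters would depend on the intended source direction; the sketch only shows the structural split into direct-sound equalization and band-wise decorrelation of early reflections.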
ASJC Scopus Subject Areas
- Engineering (all)
- Electrical and Electronic Engineering
- Physics and Astronomy (all)
- Acoustics and Ultrasonics
Cite this
Li, S., Schlieper, R., & Peissig, J. (2019). Externalization Enhancement for Headphone-Reproduced Virtual Frontal and Rear Sound Images. Paper presented at 2019 AES International Conference on Headphone Technology, San Francisco, United States.
Publication: Conference contribution › Paper › Research › Peer-reviewed
TY - CONF
T1 - Externalization Enhancement for Headphone-Reproduced Virtual Frontal and Rear Sound Images
AU - Li, Song
AU - Schlieper, Roman
AU - Peissig, Jürgen
N1 - Funding information: This work is supported by Huawei Innovation Research Program FLAGSHIP (HIRP FLAGSHIP) project. The authors would like to thank those who participated in the listening tests.
PY - 2019/8/21
Y1 - 2019/8/21
N2 - Non-individual head-related transfer functions (HRTFs) are commonly used in binaural rendering systems because measuring personal HRTFs is difficult in consumer scenarios. However, the perceived externalization of such rendered virtual sound images is usually low, especially for frontal and rear sound sources. This study proposed a method for improving perceived externalization of virtual frontal and rear sound images presented over headphones, consisting mainly of direction-dependent peak and notch filters for direct sound parts, and filter bank based decorrelation filters for early reflection parts. The result of the subjective listening experiment showed that the perceived externalization of frontal and rear sound sources was substantially improved by our proposed method compared to the conventional method.
AB - Non-individual head-related transfer functions (HRTFs) are commonly used in binaural rendering systems because measuring personal HRTFs is difficult in consumer scenarios. However, the perceived externalization of such rendered virtual sound images is usually low, especially for frontal and rear sound sources. This study proposed a method for improving perceived externalization of virtual frontal and rear sound images presented over headphones, consisting mainly of direction-dependent peak and notch filters for direct sound parts, and filter bank based decorrelation filters for early reflection parts. The result of the subjective listening experiment showed that the perceived externalization of frontal and rear sound sources was substantially improved by our proposed method compared to the conventional method.
UR - http://www.scopus.com/inward/record.url?scp=85074588719&partnerID=8YFLogxK
M3 - Paper
T2 - 2019 AES International Conference on Headphone Technology
Y2 - 27 August 2019 through 29 August 2019
ER -